November 2015, 14(6): 2431-2451. doi: 10.3934/cpaa.2015.14.2431
Regularity and nonexistence of solutions for a system involving the fractional Laplacian
De Tang 1 and Yanqin Fang 1
School of Mathematics, Hunan University, Changsha, 410082, China
Received March 2015; Revised July 2015; Published September 2015
We consider a system involving the fractional Laplacian \begin{eqnarray} \left\{ \begin{array}{ll} (-\Delta)^{\alpha_{1}/2}u=u^{p_{1}}v^{q_{1}} & \mbox{in}\ \mathbb{R}^N_+,\\ (-\Delta)^{\alpha_{2}/2}v=u^{p_{2}}v^{q_{2}} &\mbox{in}\ \mathbb{R}^N_+,\\ u=v=0,&\mbox{in}\ \mathbb{R}^N\backslash\mathbb{R}^N_+, \end{array} \right. \end{eqnarray} where $\alpha_{i}\in (0,2)$, $p_{i},q_{i}>0$, $i=1,2$. Based on the uniqueness of the $\alpha$-harmonic function [9] on the half space, the equivalence between (1) and the integral equations \begin{eqnarray} \left\{ \begin{array}{ll} u(x)=C_{1}x_{N}^{\frac{\alpha_{1}}{2}}+\displaystyle\int_{\mathbb{R}_{+}^{N}}G^{1}_{\infty}(x,y)u^{p_{1}}(y)v^{q_{1}}(y)dy,\\ v(x)=C_{2}x_{N}^{\frac{\alpha_{2}}{2}}+\displaystyle\int_{\mathbb{R}_{+}^{N}}G^{2}_{\infty}(x,y)u^{p_{2}}(y)v^{q_{2}}(y)dy \end{array} \right. \end{eqnarray} is derived. Based on this result we deal with the integral equations (2) instead of (1) and obtain the regularity of solutions. In particular, by the method of moving planes in integral forms established by Chen-Li-Ou [12], we obtain the nonexistence of positive solutions of the integral equations (2) under only local integrability assumptions.
Keywords: Kelvin transform, moving planes in integral forms, nonexistence, integral equations, equivalence, fractional Laplacians.
Mathematics Subject Classification: Primary: 35R11; Secondary: 35A01, 35B5.
Citation: De Tang, Yanqin Fang. Regularity and nonexistence of solutions for a system involving the fractional Laplacian. Communications on Pure & Applied Analysis, 2015, 14 (6) : 2431-2451. doi: 10.3934/cpaa.2015.14.2431
J. Bertoin, Lévy Processes, Cambridge Tracts in Mathematics, 1996.
H. Berestycki and L. Nirenberg, On the method of moving planes and the sliding method, Bol. Soc. Brasil. Mat. (N.S.), 22 (1991), 1. doi: 10.1007/BF01244896.
G. Bianchi, Non-existence of positive solutions to semilinear elliptic equations in $R^N$ and $R_{+}^N$ through the method of moving planes, Comm. PDE, 22 (1997), 1671. doi: 10.1080/03605309708821315.
K. Bogdan, The boundary Harnack principle for the fractional Laplacian, Studia Math., 123 (1997), 43.
X. Cabré and Y. Sire, Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates, Ann. I. H. Poincaré-AN, 31 (2014), 23. doi: 10.1016/j.anihpc.2013.02.001.
X. Cabré and J. Solà-Morales, Layer solutions in a half-space for boundary reactions, Comm. Pure Appl. Math., 58 (2005), 1678. doi: 10.1002/cpa.20093.
L. Caffarelli and L. Silvestre, An extension problem related to the fractional Laplacian, Comm. in PDE, 32 (2007), 1245. doi: 10.1080/03605300600987306.
L. Cao and W. Chen, Liouville type theorems for poly-harmonic Navier problems, Disc. Cont. Dyn. Sys., 33 (2013), 3937. doi: 10.3934/dcds.2013.33.3937.
W. Chen, Y. Fang and R. Yang, Liouville theorems involving the fractional Laplacian on a half space, Advances in Mathematics, 274 (2015), 167. doi: 10.1016/j.aim.2014.12.013.
W. Chen and C. Li, Regularity of solutions for a system of integral equations, Comm. Pure Appl. Anal., 4 (2005), 1.
W. Chen and C. Li, Methods on Nonlinear Elliptic Equations, AIMS Ser. Differ. Equ. Dyn. Syst., 4 (2010).
W. Chen, C. Jin, C. Li and J. Lim, Weighted Hardy-Littlewood-Sobolev inequalities and systems of integral equations, Discrete Contin. Dyn. Syst., (2005), 164.
R. Cont and P. Tankov, Financial Modelling with Jump Processes, Chapman and Hall/CRC Financial Mathematics Series, 2004.
G. Duvaut and J.-L. Lions, Inequalities in Mechanics and Physics, Springer-Verlag, 1976.
Y. Fang and W. Chen, A Liouville type theorem for poly-harmonic Dirichlet problems in a half space, Adv. Math., 229 (2012), 2835. doi: 10.1016/j.aim.2012.01.018.
P. Felmer and A. Quaas, Fundamental solutions and Liouville type properties for nonlinear integral operators, Adv. Math., 226 (2011), 2712. doi: 10.1016/j.aim.2010.09.023.
M. Moustapha Fall and T. Weth, Monotonicity and nonexistence results for some fractional elliptic problems in the half space, available online at http://arxiv.org/abs/1309.7230.
M. Moustapha Fall and T. Weth, Nonexistence results for a class of fractional elliptic boundary value problems, J. Funct. Anal., 263 (2012), 2205. doi: 10.1016/j.jfa.2012.06.018.
R. Metzler and J. Klafter, The random walk's guide to anomalous diffusion: A fractional dynamics approach, Phys. Rep., 339 (2000), 1. doi: 10.1016/S0370-1573(00)00070-3.
E. Milakis and L. Silvestre, Regularity for the nonlinear Signorini problem, Adv. Math., 217 (2008), 1301. doi: 10.1016/j.aim.2007.08.009.
A. Quaas and B. Sirakov, Existence and nonexistence results for fully nonlinear elliptic systems, Indiana Univ. Math. J., 58 (2009), 751. doi: 10.1512/iumj.2009.58.3501.
A. Quaas and A. Xia, Liouville type theorems for nonlinear elliptic equations and systems involving fractional Laplacian in the half space, Calc. Var. Partial Differential Equations, 52 (2015), 641. doi: 10.1007/s00526-014-0727-8.
T. Kulczycki, Properties of Green function of symmetric stable processes, Probability and Mathematical Statistics, 17 (1997), 339.
L. Silvestre, Regularity of the obstacle problem for the fractional power of the Laplace operator, Comm. Pure Appl. Math., 60 (2007), 67. doi: 10.1002/cpa.20153.
Y. Sire and E. Valdinoci, Fractional Laplacian phase transitions and boundary reactions: a geometric inequality and a symmetry result, J. Funct. Anal., 256 (2009), 1842. doi: 10.1016/j.jfa.2009.01.020.
\begin{definition}[Definition:Categorical Syllogism/Premises/Major Premise]
The '''major premise''' of a categorical syllogism is conventionally stated first.
It is a categorical statement which expresses the logical relationship between the primary term and the middle term of the syllogism.
\end{definition}
Regarding the theory of the origin of water on Earth through meteorites, why wouldn't the water evaporate on impact?
Water on Earth has been theorized to have come through comets trapped inside crystals. But why wouldn't that water evaporate on impact, and wouldn't the atmosphere at that time allow the vapours to escape Earth?
Also, what is the current scientific opinion of the validity of this theory?
earth-history meteorite planetary-formation
Daud
Also, there is water in our mantle, as water can be stored within the rocks at the molecular/lattice level. I don't know enough about impacts for a full answer, but it certainly MIGHT be possible that the water simply could not escape the lattice on impact.
– Neo
@Neo You might be thinking of meteoroids/asteroids. Comets would hold their water almost entirely as ice. --- Oh, I just noticed the title does not match the text.
– Eubie Drew
why wouldn't that water evaporate on impact, and wouldn't the atmosphere at that time allow the vapours to escape Earth?
The water would very likely evaporate on impact.
However, gravity would prevent the gas phase water molecules from leaving Earth.
The speed of a water molecule must be compared to the escape velocity of Earth (11 km/s) to determine whether or not the molecule can escape.
At a given temperature, the velocities of water molecules will be governed by the Maxwell-Boltzmann distribution.
The most probable velocity of a molecule will be:
$$V= \sqrt{\frac{2kT}{m}}$$
where $m$ is the mass of the molecule and $k$ is Boltzmann's constant.
For example, at a temperature of 300 K, a water molecule will have a most probable velocity of 520 m/s, about a factor of 20 below the escape velocity.
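For a quick numeric check of this comparison, the most probable speed can be evaluated directly (a minimal sketch in Python; the constants are standard values and the temperature list is illustrative):

```python
import math

k_B = 1.380649e-23            # Boltzmann constant (J/K)
m = 18.015 * 1.66054e-27      # mass of one water molecule (kg)
v_esc = 11.2e3                # Earth's escape velocity (m/s)

def most_probable_speed(T):
    """Most probable speed of the Maxwell-Boltzmann distribution, sqrt(2kT/m)."""
    return math.sqrt(2.0 * k_B * T / m)

for T in (300, 1000, 3000):
    v = most_probable_speed(T)
    print(f"T = {T:4d} K: v = {v:6.0f} m/s, v_esc/v = {v_esc / v:.0f}")
# At 300 K this gives v ~ 520 m/s, roughly a factor of 20 below escape velocity.
```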
what is the current scientific opinion of the validity of this theory?
Comets have a higher fraction of deuterium in their water compared to Earth.
According to "Earth's water probably didn't come from comets, Caltech researchers say", this refutes the hypothesis that a large fraction of Earth's water came from comets.
DavePhD
November 2018, 38(11): 5835-5881. doi: 10.3934/dcds.2018254
Multiplicity and concentration results for some nonlinear Schrödinger equations with the fractional p-Laplacian
Vincenzo Ambrosio 1 and Teresa Isernia 2
Department of Mathematics, EPFL SB CAMA, Station 8 CH-1015 Lausanne, Switzerland
Dipartimento di Ingegneria Industriale e Scienze Matematiche, Università Politecnica delle Marche, Via Brecce Bianche, 12, 60131 Ancona, Italy
Received February 2018; Revised July 2018; Published August 2018
We consider a class of parametric Schrödinger equations driven by the fractional $p$-Laplacian operator and involving continuous positive potentials and nonlinearities with subcritical or critical growth. Using variational methods and Ljusternik-Schnirelmann theory, we study the existence, multiplicity and concentration of positive solutions for small values of the parameter.
Keywords: Fractional Schrödinger equation, fractional $p$-Laplacian operator, Nehari manifold, Ljusternik-Schnirelmann theory, critical growth.
Mathematics Subject Classification: Primary: 47G20, 35R11; Secondary: 35A15, 58E05.
Citation: Vincenzo Ambrosio, Teresa Isernia. Multiplicity and concentration results for some nonlinear Schrödinger equations with the fractional p-Laplacian. Discrete & Continuous Dynamical Systems - A, 2018, 38 (11) : 5835-5881. doi: 10.3934/dcds.2018254
Absolute surface metrology by shear rotation with position error correction
Weibo Wang1,2,
Biwei Wu1,
Pengfei Liu1,
Dong Huo1 &
Jiubin Tan1
Absolute testing is one of the most important and efficient techniques to separate the reference surface error, which usually limits the accuracy of test results.
For position error correction in absolute interferometric tests based on rotational and translational shears, the estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy, and the errors of angular orders are compensated with the help of Zernike polynomial fitting using an additional rotation measurement with a suitable selection of rotation angles.
Experimental results show that the corrected results obtained in the presence of azimuthal errors are very close to those with no errors, in contrast to the results before correction.
The testing errors caused by rotation inaccuracy and alignment errors of the measurements can thus be eliminated from the differences in measurement results by the proposed method.
In optical interferometric testing, the test surface map is not obtained independently but only in combination with the reference surface. Several ingenious techniques have been devised to obtain absolute surface measurements, e.g., the two-sphere method [1, 2] for spherical reference surfaces and the "three-flat" approach for flat surfaces [3]. However, the classic two-sphere method with cat's-eye position measurement is sensitive to the lateral shear of the coma wavefront, which will introduce astigmatism and spherical terms [2]. For decades, shift-rotation methods that avoid testing at the cat's-eye position have been developed to test spherical and flat surfaces [4,5,6,7,8,9]. These approaches yield an estimate for the test surface errors without changing experimental settings, such as cavity length, that may affect the apparent reference errors. The classic multi-angle averaging method proposed by Evans and Kestner measures the spherical surface at N angular positions equally spaced with respect to the optical axis and averages the resulting wavefronts; errors in the rotated member with angular orders that are not integer multiples of the number of positions are then removed without Zernike fitting [10, 11].
Previous absolute test methods always assume that there is no azimuthal position error during part rotation. However, the rotations of the test part introduce uncertainties related to azimuthal errors of the rotational angle and lateral displacement of the part with respect to the optical axis of the interferometer [11]. Moreover, rotation should be very precise when higher-order spatial frequency terms are required, which are particularly sensitive to azimuthal position errors. In practice, it is challenging to rotate the test surface accurately to the desired positions, especially for large optics, and to keep the environment and metrology system stable during the multiple measurements [12]. So we present a method to determine the true azimuthal positions of part rotation and consequently eliminate testing errors caused by rotation inaccuracy.
The shearing test is based on the analysis of differences in measurement results that occur when rotating or translating the test surface. The test results yield a collection of error maps. Each error map describes the sum of apparent reference errors and test surface errors for a particular position and orientation of the test surface. If the test part is rotated to N equally spaced positions about the optical axis and the resulting wavefronts are averaged, we get the averaged wavefront
$$ {T}_{ave}\left(\rho, \theta \right)=\frac{1}{N}{\displaystyle \sum_{i=0}^{N-1}{T}_i\left(\rho, \theta \right)}=\frac{1}{N}{\displaystyle \sum_{i=0}^{N-1}\left[R\left(\rho, \theta \right)+S\left(\rho, \theta +i\frac{2\pi}{N}\right)\right]} $$
where R(ρ, θ) is the systematic error including the reference surface, S(ρ, θ) is the surface error of the test part.
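A minimal numeric sketch of this averaging, using made-up error maps on a polar grid (the specific error shapes below are illustrative, not taken from any experiment):

```python
import numpy as np

N = 6                                        # number of rotation positions
rho = np.linspace(0.0, 1.0, 64)[:, None]     # normalized radius
theta = np.linspace(0.0, 2*np.pi, 360, endpoint=False)[None, :]

R = 0.3 * rho**2 * np.cos(2*theta)           # hypothetical systematic (reference) error
S = lambda t: 0.2 * rho**3 * np.cos(3*t)     # hypothetical test-surface error, order 3

# Eq. (1): average the N measurements T_i = R + S(theta + i*2*pi/N)
T_ave = sum(R + S(theta + i*2*np.pi/N) for i in range(N)) / N

# Angular order 3 is not a multiple of N = 6, so S averages away and T_ave ~ R
assert np.allclose(T_ave, R, atol=1e-12)
```

Had S contained an order-6 component, it would survive the average; this is exactly the kNθ term discussed below.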
The wavefront of circular cross section can be expanded by polar coordinate polynomials in the following form
$$ W\left(\rho, \theta \right)={\displaystyle \sum_{k,l}{R}_l^k\left(\rho \right)\left({\alpha}_l^k \cos k\theta +{\alpha}_l^{-k} \sin k\theta \right)} $$
where \( {R}_l^k\left(\rho \right) \) are the radial terms of Zernike polynomials and coefficients \( {\alpha}_l^{\pm k} \) specify the magnitude of each term while the angular terms specify the angular part of the polynomial representation. ρ and θ are the normalized radial and angular coordinates.
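For reference, the radial terms can be evaluated with the standard factorial formula (a sketch; note that index conventions for \( {R}_l^k \) differ between references):

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) for n >= |m| >= 0 and n - |m| even."""
    m = abs(m)
    if (n - m) % 2:
        return np.zeros_like(rho)
    return sum((-1)**s * factorial(n - s)
               / (factorial(s) * factorial((n + m)//2 - s) * factorial((n - m)//2 - s))
               * rho**(n - 2*s) for s in range((n - m)//2 + 1))

rho = np.linspace(0.0, 1.0, 5)
print(zernike_radial(2, 0, rho))   # equals 2*rho**2 - 1 (defocus radial term)
```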
From Eq. (2), if the wavefront is rotated to N equally spaced positions about the optical axis (φ = 2π/ N), the averaged resulting wavefront can be written as
$$ {W}_{ave}\left(\rho, \theta \right)=\frac{1}{N}{\displaystyle \sum_{j=0}^{N-1}W\left(\rho, \theta +j\frac{2\pi }{N}\right)}=\frac{1}{N}{\displaystyle \sum_{k,l}{R}_l^k\left(\rho \right)\left( \cos k\theta {\displaystyle \sum_{j=0}^{N-1}{\alpha}_l^{j\varphi, k}}+ \sin k\theta {\displaystyle \sum_{j=0}^{N-1}{\alpha}_l^{j\varphi, -k}}\right)} $$
where the coefficients of the wavefront rotated by φ are given by
$$ \left[\begin{array}{c}\hfill {\alpha}_l^{\varphi, k}\hfill \\ {}\hfill {\alpha}_l^{\varphi, -k}\hfill \end{array}\right]=\left[\begin{array}{cc}\hfill \cos k\varphi \hfill & \hfill \sin k\varphi \hfill \\ {}\hfill - \sin k\varphi \hfill & \hfill \cos k\varphi \hfill \end{array}\right]\left[\begin{array}{c}\hfill {\alpha}_l^k\hfill \\ {}\hfill {\alpha}_l^{-k}\hfill \end{array}\right] $$
For k = 0 (i.e., for rotationally symmetric terms), it is the intuitively obvious result that the procedure has no influence on rotationally symmetric terms. For k ≠ 0, the series sum to zero for all cos kφ terms except those with k = cN (c = 1, 2, 3, …) and for all sin kφ terms. It is easy to see that rotating a wavefront to N equally spaced positions and averaging removes non-rotationally symmetric terms of all angular orders except kNθ. The term W kNθ (ρ, θ) is the Nth rotationally symmetric component (angular orders kNθ), which can be written as
$$ {W}_{kN\theta}\left(\rho, \theta \right)={\displaystyle \sum_{k,l}{\left(-1\right)}^{k\left(N+1\right)}{R}_l^{kN}\left(\rho \right)\left({\alpha}_l^{kN} \cos kN\theta +{\alpha}_l^{-kN} \sin kN\theta \right)} $$
So the averaged test wavefront can be rewritten as
$$ {T}_{ave}\left(\rho, \theta \right)=R\left(\rho, \theta \right)+{S}_{sym}\left(\rho, \theta \right)+{W}_{kN\theta}\left(\rho, \theta \right) $$
where S sym (ρ, θ) is the rotational symmetry surface deviation of the test part S(ρ, θ).
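The selection rule above (only angular orders that are multiples of N survive the average) can be checked directly with a trivial numeric sketch:

```python
import numpy as np

N, theta = 6, 0.7                     # arbitrary probe angle
for k in range(1, 13):
    avg = sum(np.cos(k*(theta + j*2*np.pi/N)) for j in range(N)) / N
    print(k, round(avg, 12))          # nonzero only for k = 6 and k = 12
```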
Furthermore, the asymmetric component of the test surface can be derived as
$$ {S}_{asy}\left(\rho, \theta \right)={T}_i\left(\rho, \theta \right)-{T}_{ave}\left(\rho, \theta \right)+{W}_{kN\theta}\left(\rho, \theta \right) $$
The errors of angular variation kNθ can be represented based on Zernike polynomials and an additional shear rotation measurement [9]. They may be neglected in the multi-angle averaging method when N is large enough.
Additional measurements provide redundancies that improve and characterize measurement uncertainties. However, the rotation of the test part also introduces uncertainties related to azimuthal errors of the rotational angle and lateral displacement of the part with respect to the optical axis of the interferometer. Moreover, it is challenging to rotate the test surface accurately to the desired positions, especially for large optics, and to keep the environment and metrology system stable during the multiple measurements.
So an estimation algorithm is presented to eliminate azimuthal errors caused by rotation inaccuracy. The unknown relative alignment of the measurements can also be estimated from the differences in measurement results at overlapping areas.
The difference W between the shear rotation measurements can be written as
$$ \begin{array}{c}W=R\left(\rho, \theta \right)+{S}_i\left(\rho, \theta \right)-R\left(\rho, \theta \right)-{S}_j\left(\rho, \theta +\varphi \right)\\ {}={S}_i\left(\rho, \theta \right)-{S}_j\left(\rho, \theta +\varphi \right)\\ {}={\displaystyle \sum_{k,l}{R}_l^k\left(\rho \right)\left(\Delta {\alpha}_l^k \cos k\theta +\Delta {\alpha}_l^{-k} \sin k\theta \right)}\end{array} $$
where \( \Delta {\alpha}_l^{\pm k} \) is the differences of the coefficients between two measurements.
It is straightforward to find \( {\alpha}_l^{\pm k} \) in terms of \( \Delta {\alpha}_l^{\pm k} \) from the difference of two measurements using Eqs. (4) and (8):
$$ {\alpha}_l^{\pm k}=-\frac{1}{2}\left[\Delta {\alpha}_l^{\pm k}\pm \frac{\Delta {\alpha}_l^{\mp k} \sin k\varphi }{\left(1- \cos k\varphi \right)}\right] $$
This shows that the azimuthal terms of the wavefront can be determined from the azimuthal terms of the difference between the original wavefront and itself after rotation by φ. So the wavefront can be represented based on Zernike polynomials. Furthermore, the kNθ variations of the surface deviation W kNθ (ρ, θ) neglected in the multi-angle averaging method can also be obtained by additional rotation testing with a suitable selection of the rotation angle θ 0 with k = cN and kθ 0 ≠ 2mπ (m is an integer).
The differences of the coefficients between two measurements can be written as
$$ \Delta {\alpha}_l^{\pm k}={\alpha}_l^{\pm k}\left( \cos k{\varphi}_i-1\right)\pm {\alpha}_l^{\mp k} \sin k{\varphi}_i $$
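In code, the recovery in Eq. (9) amounts to inverting a 2 × 2 system per (k, l); a round-trip sketch with arbitrary made-up coefficients:

```python
import numpy as np

def recover_coeffs(d_plus, d_minus, k, phi):
    """Invert Eq. (10) for (alpha_+k, alpha_-k) as in Eq. (9).

    Valid whenever k*phi is not an integer multiple of 2*pi.
    """
    s, c = np.sin(k * phi), np.cos(k * phi)
    a_plus = -0.5 * (d_plus + d_minus * s / (1.0 - c))
    a_minus = -0.5 * (d_minus - d_plus * s / (1.0 - c))
    return a_plus, a_minus

# Round-trip check: build differences from known coefficients, then recover them
a_p, a_m, k, phi = 0.7, -0.4, 3, np.deg2rad(40.0)
d_p = a_p * (np.cos(k*phi) - 1.0) + a_m * np.sin(k*phi)   # Eq. (10), upper sign
d_m = a_m * (np.cos(k*phi) - 1.0) - a_p * np.sin(k*phi)   # Eq. (10), lower sign
print(recover_coeffs(d_p, d_m, k, phi))                   # -> (0.7, -0.4)
```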
For azimuthal position error correction, the angles φ i can be treated as additional unknowns together with the coefficients \( {\alpha}_l^{\pm k} \). Their actual values can then be determined from the measured difference wavefronts by the least-squares method. The estimation algorithm thus adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy.
From Eq. (8), the wavefront difference can be further written as
$$ \begin{array}{c}{W}_i^k={\displaystyle \sum_{k,l}{R}_l^k\left(\rho \right)\Big\{\left( \cos k{\varphi}_i-1\right)\left({\alpha}_l^k \cos k\theta +{\alpha}_l^{-k} \sin k\theta \right)}+ \sin k{\varphi}_i\left({\alpha}_l^{-k} \cos k\theta +{\alpha}_l^k \sin k\theta \right)\Big\}\\ {}={\displaystyle \sum_{k,l}\Big\{{\gamma}_{0l}^k{\mathrm{Z}}_l^k\left(\rho, \theta \right)\left( \cos k{\varphi}_i-1\right)+{\tilde{\gamma}}_{0l}^k{\mathrm{Z}}_l^k\left(\rho, \theta \right) \sin k{\varphi}_i}\Big\}\\ {}={\displaystyle \sum_{k,l}{\xi}_{li}^k{\mathrm{Z}}_l^k\left(\rho, \theta \right)}\end{array} $$
The cost functions can be obtained by the least-squares method and minimized to determine the true values of the unknowns \( {\gamma}_{0l}^k \), \( {\tilde{\gamma}}_{0l}^k \) and φ i , as discussed in [12].
$$ \left[\begin{array}{cc}\hfill {\displaystyle \sum_{i=0}^{N-1}{\left[ \cos \left(k{\varphi}_i\right)-1\right]}^2}\hfill & \hfill {\displaystyle \sum_{i=0}^{N-1} \sin \left(k{\varphi}_i\right)\left[ \cos \left(k{\varphi}_i\right)-1\right]}\hfill \\ {}\hfill {\displaystyle \sum_{i=0}^{N-1} \sin \left(k{\varphi}_i\right)\left[ \cos \left(k{\varphi}_i\right)-1\right]}\hfill & \hfill {\displaystyle \sum_{i=0}^{N-1}{ \sin}^2\left(k{\varphi}_i\right)}\hfill \end{array}\right]\left[\begin{array}{c}\hfill {\gamma}_{0l}^k\hfill \\ {}\hfill {\tilde{\gamma}}_{0l}^k\hfill \end{array}\right]=\left[\begin{array}{c}\hfill {\displaystyle \sum_{i=0}^{N-1}{\widehat{X}}_{li}^k\left[ \cos \left(k{\varphi}_i\right)-1\right]}\hfill \\ {}\hfill {\displaystyle \sum_{i=0}^{N-1}{\widehat{X}}_{li}^k \sin \left(k{\varphi}_i\right)}\hfill \end{array}\right] $$
$$ \left[\begin{array}{cc}\hfill {\displaystyle \sum_l^{L(k)}{\left[{\gamma}_{0l}^k\right]}^2}\hfill & \hfill {\displaystyle \sum_l^{L(k)}{\gamma}_{0l}^k{\tilde{\gamma}}_{0l}^k}\hfill \\ {}\hfill {\displaystyle \sum_l^{L(k)}{\gamma}_{0l}^k{\tilde{\gamma}}_{0l}^k}\hfill & \hfill {\displaystyle \sum_l^{L(k)}{\left[{\tilde{\gamma}}_{0l}^k\right]}^2}\hfill \end{array}\right]\left[\begin{array}{c}\hfill \cos \left(k{\varphi}_i\right)\hfill \\ {}\hfill \sin \left(k{\varphi}_i\right)\hfill \end{array}\right]=\left[\begin{array}{c}\hfill {\displaystyle \sum_l^{L(k)}\left\{{\widehat{X}}_{li}^k{\gamma}_{0l}^k+{\left[{\gamma}_{0l}^k\right]}^2\right\}}\hfill \\ {}\hfill {\displaystyle \sum_l^{L(k)}\left\{{\widehat{X}}_{li}^k{\widehat{\gamma}}_{0l}^k+{\gamma}_{0l}^k{\widehat{\gamma}}_{0l}^k\right\}}\hfill \end{array}\right] $$
This generalized algorithm adopts a least-squares technique to determine the true azimuthal positions of part rotation and consequently eliminates testing errors caused by rotation inaccuracy. The true values of the unknowns \( {\gamma}_{0l}^k \), \( {\tilde{\gamma}}_{0l}^k \) and φ i can be obtained by an iterative procedure. The total computational time is influenced by the number of Zernike polynomial terms in consideration (maximum l and k), the number of rotations N, and the precision of the initial guess of φ i . Finally, the testing errors caused by rotation inaccuracy can be compensated using the solutions for \( {\gamma}_{0l}^k \), \( {\tilde{\gamma}}_{0l}^k \) and φ i .
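A schematic implementation of this alternating iteration (a sketch only: it assumes the measured difference-map Zernike coefficients are arranged in a hypothetical array X[k_index, l, i], and it refines the angles by a simple grid search rather than solving Eq. (13) in closed form):

```python
import numpy as np

def estimate_angles(X, ks, phi0, n_iter=30):
    """Alternating least squares for X[ki, l, i] ~ g*(cos(k*phi_i) - 1) + gt*sin(k*phi_i)."""
    ks = np.asarray(ks, dtype=float)
    phi = np.asarray(phi0, dtype=float).copy()
    K, L, N = X.shape
    g, gt = np.zeros((K, L)), np.zeros((K, L))
    grid = np.linspace(-np.pi, np.pi, 2001)
    for _ in range(n_iter):
        # Step 1 (cf. Eq. 12): with phi fixed, fit (g, gt) by linear least squares
        for ki, k in enumerate(ks):
            A = np.column_stack([np.cos(k*phi) - 1.0, np.sin(k*phi)])   # N x 2
            sol, *_ = np.linalg.lstsq(A, X[ki].T, rcond=None)           # 2 x L
            g[ki], gt[ki] = sol
        # Step 2 (cf. Eq. 13): with (g, gt) fixed, refine each angle phi_i
        for i in range(N):
            def residual(p):
                model = g*(np.cos(ks[:, None]*p) - 1.0) + gt*np.sin(ks[:, None]*p)
                return np.sum((X[:, :, i] - model)**2)
            phi[i] = min(grid, key=residual)
    return g, gt, phi
```

In practice the iteration would be warm-started from the nominal rotation angles and stopped once the φ i estimates converge.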
For the verification of the described method, experiments are presented using a standard Fizeau interferometer. The surface under test is a spherical mirror with a clear aperture of 100 mm and a surface error within λ/10 PV. The accuracy of the rotations is better than 0.1°, and the ZYGO 5-axis mount provides 13 mm of X and Y adjustment, 50 mm of Z adjustment, and ±2° of tip and tilt adjustment. The spherical surface is tested at the normal testing position and at various orientations with the classic multi-angle averaging method. These approaches yield an estimate for the test surface errors without changing experimental settings, such as cavity length, that may affect the apparent reference errors.
The averaged wavefronts for N = 6 and 12 are shown in Fig. 1. The errors of angular orders kNθ, resembling a hexagon, can be seen clearly in Fig. 1a; they may introduce unnecessary measurement errors when neglected in absolute surface metrology. When N is large enough, the surviving kNθ terms are close to rotationally symmetric deviations, as shown in Fig. 1b, and the errors of angular orders kNθ can be quite small.
Fig. 1 The averaged wavefront (N = 6 and 12). (a) N = 6: PV = 44.10 nm, RMS = 5.50 nm; (b) N = 12: PV = 37.90 nm, RMS = 4.75 nm
The differences between Fig. 1a and b are shown in Fig. 2a. It can be seen clearly that the averaged wavefronts suffer from the errors of angular orders kNθ. For the compensation of W kNθ (ρ, θ), additional rotation testing with a suitable selection of rotation angles is implemented. The W kNθ (ρ, θ) of the test surface is reconstructed and compensated with the help of least-squares fitting of Zernike polynomials. The differences between Fig. 1a and b after W kNθ (ρ, θ) compensation can be seen in Fig. 2b, and the compensated errors of angular variation kNθ are shown in Fig. 3. The differences after compensation are very small; the errors of angular variation kNθ have been well compensated. This implies that the described method with W kNθ (ρ, θ) compensation can achieve high accuracy even with fewer rotation measurements. However, because of position errors, the errors caused by rotation inaccuracy can still be seen in Fig. 2b.
Fig. 2 Differences of the averaged wavefront between N = 6 and N = 12 before and after compensation. (a) Before compensation: PV = 8.27 nm, RMS = 0.80 nm; (b) After compensation: PV = 6.30 nm, RMS = 0.70 nm
Fig. 3 Compensated errors of angular variation kNθ. (a) N = 6: PV = 8.22 nm, RMS = 1.07 nm; (b) N = 12: PV = 6.30 nm, RMS = 0.81 nm
Furthermore, the averaged wavefront for N = 6 with position errors (azimuthal errors and alignment errors) introduced is shown in Fig. 4, and the difference of the averaged wavefront for N = 6 before and after position errors are introduced is shown in Fig. 5. Figures 1a and 4 have similar distributions of optical path difference but differ somewhat in PV and RMS; more details can be seen in Fig. 5. The test results suffer from the position errors. As mentioned above, it is difficult to rotate the test surface accurately to the desired positions, especially for large optics. There are also many challenges in keeping the environment and metrology system stable during the multi-averaging measurements, especially for large N. Position error correction is therefore necessary.
Fig. 4 The averaged wavefront for N = 6 with position errors
Fig. 5 The difference of the averaged wavefront for N = 6 before and after position errors are introduced
In order to correct the errors due to rotation inaccuracy, the estimation algorithm adopts a least-squares technique to determine the true azimuthal positions of part rotation and consequently eliminates testing errors caused by rotation inaccuracy. The surface is tested on the precision rotation stage with accurate positions and with random azimuthal errors within ±2°, respectively.
The Zernike coefficients of the results in absolute surface metrology are shown in Fig. 6, together with the coefficients obtained after correction of the azimuthal and alignment errors. The corrected Zernike coefficients are very close to those obtained with fine adjustment and no additional azimuthal errors, in contrast to the results before correction. The coma terms (Z7 ~ Z8, Z14 ~ Z15, Z23 ~ Z24) and spherical terms (Z9, Z16, Z25, Z36) introduced by the azimuthal and alignment errors have been well suppressed. This implies that the testing errors caused by rotation inaccuracy and alignment errors of the measurements can be eliminated from the differences in measurement results by the proposed method.
Fig. 6 Coefficients of Zernike polynomials (λ = 632.8 nm)
We have discussed a position error estimation algorithm that determines the true azimuthal positions of part rotation, together with a kNθ compensation method that offers the possibility of obtaining high accuracy even with fewer rotation measurements. The method can be used to overcome the challenges of rotating the test surface accurately to the desired positions, especially for large optics, and to obtain the required higher-order spatial frequency terms. Experimental results verify the effectiveness of the proposed method.
Jensen, A.E.: Absolute calibration method for Twyman-Green wavefront testing interferometers. J. Opt. Soc. Am. 63, 1313A (1973)
Selberg, L.A.: Absolute testing of spherical surfaces. In: Optical Fabrication and Testing, Vol. 13 of OSA 1994 Technical Digest Series, pp. 181–184. Optical Society of America, Washington, D.C (1994)
Fritz, B.S.: Absolute calibration of an optical flat. Opt. Eng. 23, 379–383 (1984)
Freischlad, K.R.: Absolute interferometric testing based on reconstruction of rotational shear. Appl. Opt. 40(10), 1637–1648 (2001)
Bloemhof, E.E.: Absolute surface metrology by differencing spatially shifted maps from a phase-shifting interferometer. Opt. Lett. 35(14), 2346–2348 (2010)
Soons, J.A., Griesmann, U.: Absolute interferometric tests of spherical surfaces based on rotational and translational shears. Proc. SPIE 8493, 84930G (2012)
Su, D., Miao, E., Sui, Y., Yang, H.: Absolute surface figure testing by shift-rotation method using Zernike polynomials. Opt. Lett. 37, 3198–3200 (2012)
Weibo, W., Mengqian, Z., Siwen, Y., Zhigang, F., Jiubin, T.: Absolute spherical surface metrology by differencing rotation maps. Appl. Opt. 54(20), 6186–6189 (2015)
Weibo, W., Pengfei, L., Yaolong, X., Jiubin, T., Jian, L.: Error correction for rotationally asymmetric surface deviation testing based on rotational shears. Appl. Opt. 55(26), 7428–7433 (2016)
Song, W., Wu, F., Hou, X.: Method to test rotationally asymmetric surface deviation with high accuracy. Appl. Opt. 51, 5567–5572 (2012)
Evans, C.J., Kestner, R.N.: Test optics error removal. Appl. Opt. 35(7), 1015–1021 (1996)
Hyug-Gyo, R., Yun-Woo, L.: Azimuthal position error correction algorithm for absolute test of large optical surfaces. Opt. Express 14(20), 9169–9177 (2006)
National Natural Science Foundation of China (51205089, 51275121 and 51475111), China Postdoctoral science foundation (2012 M520726), National Key Scientific Instrument and Equipment Development Project (2011YQ040087), China Scholarship Council (201406125121).
All authors have participated in the method discussion and result analysis. The experiments are conducted by JT. All authors have read and agreed with the contents of the final manuscript.
Institute of Ultra-precision Optoelectronic Instrument Engineering, Harbin Institute of Technology, Harbin, 150001, China
Weibo Wang, Biwei Wu, Pengfei Liu, Dong Huo & Jiubin Tan
Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, UK
Correspondence to Weibo Wang.
Wang, W., Wu, B., Liu, P. et al. Absolute surface metrology by shear rotation with position error correction. J. Eur. Opt. Soc.-Rapid Publ. 13, 2 (2017). https://doi.org/10.1186/s41476-016-0032-6
Received: 29 September 2016
Absolute test
Shear rotation
Zernike polynomials | CommonCrawl |
\begin{document}
\title{Photon exchange and entanglement formation during the transmission through a rectangular quantum barrier}
\author{Georg Sulyok$^{1}$} \author{Katharina Durstberger-Rennhofer$^{1}$} \author{Johann Summhammer $^{1}$}
\affiliation{ $^1$Institute of Atomic and Subatomic Physics, Vienna University of Technology, 1020 Vienna, Austria}
\date{\today}
\begin{abstract} When a quantum particle traverses a rectangular potential created by a quantum field both photon exchange and entanglement between particle and field take place. We present analytic results for the transition amplitudes of any possible photon exchange processes for an incoming plane wave and initial Fock, thermal and coherent field states. We show that for coherent field states the entanglement correlates the particle's position to the photon number in the field instead of the particle's energy as usual. Besides entanglement formation, remarkable differences to the classical field treatment also appear with respect to the symmetry between photon emission and absorption, resonance effects and if the field initially occupies the vacuum state. \end{abstract}
\pacs{03.65.Xp, 42.50.Ct, 03.65.Nk, 03.65.Yz }
\maketitle
\section{Introduction} \label{sec:intro} The behaviour of a quantum particle exposed to an oscillating rectangular potential has been studied by several authors under different aspects involving, for example, tunnelling time \cite{Buettiker_Landauer_traversal_time, Stovneng_Hauge}, chaotic signatures \cite{Leonel_barrier_chaos, Henseler_quantum_periodically_driven_scattering}, appearance of Fano resonances \cite{quantum_barrier_Fano_resonances_Lu}, Floquet scattering for strong fields \cite{Reichl_Floquet_strong_fields} and its absence for non-Hermitian potentials \cite{Longhi_oscillating_non-Hermitian_potential}, chiral tunnelling \cite{chiral_tunneling}, charge pumping \cite{charrge_pumping_Wu} and other photon assisted quantum transport phenomena in theory \cite{TienGordon, PAT_Platero_phys_rep, PAT_quantum_transport_Wei} and experiment \cite{PAT_Blick, PAT_Drexler, PAT_Kouwenhoven, PAT_Verghese, PAT_Wyss}.
In these works, though the potential is treated as a classical quantity, the change of the particle's energy is explicitly attributed to a photon emission or absorption process. Here, we introduce the photon concept in a formally correct way by describing the field generating the potential as quantized. Hence, we pursue the ideas which we started to elaborate in our previous publication \cite{Sulyok_Summi_Rauch_PRA}. There, we only arrived at an algebraic expression for the photon transition amplitudes, whereas we are now able to present analytic results for all important initial field states, enabling advanced investigations of photon exchange processes and entanglement formation.
In order to compare semiclassical and fully-quantized treatment in our physical scenario, we will at first recapitulate the results of the calculation for a classical field (chap.\ref{sec:classical}). Then, we turn to the quantized field treatment (chap.\ref{sec:quantized}). After presenting the general algebraic solution, we will explicitly evaluate the photon exchange probabilities for an incoming plane wave and for a field being initially in an arbitrary Fock state, a thermal state or a coherent state. The special cases of no initial photons (vacuum state) and of high initial photon numbers will be treated in particular.
\section{Classical treatment of the field} \label{sec:classical}
The potential created by a classical field is a real-valued function of space and time in the particle's Hamiltonian. Our considered potential oscillates harmonically in time and is spatially constant for $0\leq x < L$ and vanishes outside. \begin{equation} \hat H = \left\{ \begin{array}{ll} \frac{\hat p^2}{2m} + V \cos(\omega t+\varphi), & \textrm{if $0 \le x < L $ (region II)}\\ \frac{\hat p^2}{2m}, & \textrm{else (region I+III)} \end{array} \right. \end{equation} It therefore corresponds to a harmonically oscillating rectangular potential barrier (see fig.\ref{fig:potential2D}). \begin{figure}
\caption{Spatial characteristics of the considered potential $V$. It is harmonically oscillating in time with frequency $\omega$ in region II and vanishes elsewhere. An incoming plane wave with energy $E_0\gg V$ is split up into a coherent superposition of plane waves with energy $E_n=E_0+n\hbar \omega$.}
\label{fig:potential2D}
\end{figure}
The Schr\"odinger equation is solved in each of the three regions separately and then the wave functions are matched by continuity conditions. A general approach based on Floquet theory \cite{hideo_sambe_floquet} can be found in \cite{Li_Reichl_floquet}. We restrict ourselves to incoming waves whose energy $E_0$ is much higher than the potential ($E_0 \gg V$). Reflection at the barrier can then be neglected and standard methods for differential equations suffice to find the solution \cite{Summhammer93MultiPhoton, HaavigReifenberger}. If we assume the wave function $\ket{\psi_I}$ in region I to be a plane wave with wave vector $k_0$ we get for the wave function $\ket{\psi_{III}}$ behind the potential barrier \begin{equation} \label{eq:classical_solutions} \ket{\psi_{I}}=\ket{k_0} \, \Longrightarrow \, \ket{\psi_{III}}= \sum_{n=-\infty}^{+\infty} J_n (\beta)\ e^{-i n \eta} \, \ket{k_n} \end{equation} where \begin{eqnarray} \label{eq:abbreviations_classical1} \beta &=& 2 \frac{V}{\hbar\omega} \sin\frac{\omega \tau}{2}, \quad \eta=\varphi+\frac{\omega \tau}{2} + \frac{\pi}{2} \\ \label{eq:abbreviations_classical2} \tau &=& \frac{m L}{\hbar k_0}=\frac{L}{v_0} ,\ k_n^{2}=k_0^2+ \frac{2m}{\hbar} n \omega \end{eqnarray} For a more detailed derivation including the solution for region II as well we refer to \cite{Summhammer93MultiPhoton, sulyok_photexchg_pra}.
In summary, a plane wave $\ket{k_0}$ gets split up into a coherent superposition of plane waves $\ket{k_n}$ whose energy is given by the incident energy $E_0$ plus integer multiples of $\hbar \omega$. The transition probability for an energy exchange of $n \hbar \omega$ is just the square $J_n^{\,2}$ of the Bessel function of $n$-th order. The argument of the Bessel function shows that an increasing amplitude $V$ of the potential also increases the probability for exchanging larger amounts of energy.
Apart from this expected result, it also exhibits a "resonance"-condition. If the "time-of-flight" $\tau$ through the field region and the oscillation frequency are tuned such that $\omega \tau = 2l\pi,\, l\in\mathbb N$, all Bessel functions $J_n$ with $n \neq 0$ vanish and no energy is transferred at all. The plane wave even passes the potential completely unaltered since $J_0(0)=1$. That's a remarkable difference between an oscillating and a static potential where at least phase factors are always attached to the wave function. An experimental implementation of the classical potential can be found in \cite{sulyok_photexchg_pra,Summhammer95MultiPhotonObservation}.
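A short numerical illustration of eq.~(\ref{eq:classical_solutions}) may be instructive (a minimal sketch with arbitrary parameter values, $V$ given in units of $\hbar\omega$; the resonance condition $\omega\tau=2\pi$ is checked explicitly):
\begin{verbatim}
import numpy as np
from scipy.special import jv

def exchange_probs(V_over_hw, omega_tau, n_max=6):
    """P(n) = J_n(beta)^2, beta = 2*(V/hbar omega)*sin(omega tau/2)."""
    beta = 2.0 * V_over_hw * np.sin(omega_tau / 2.0)
    n = np.arange(-n_max, n_max + 1)
    return n, jv(n, beta)**2

n, P = exchange_probs(V_over_hw=1.0, omega_tau=np.pi/2)
print(P.sum())        # ~1: the exchange probabilities sum to one
n, P = exchange_probs(V_over_hw=1.0, omega_tau=2*np.pi)
print(P[n == 0])      # exactly 1: resonance, the wave passes unaltered
\end{verbatim}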
\section{Quantized treatment of the field} \label{sec:quantized}
Since the energy exchange between the harmonically oscillating potential and the particle is quantized by integer multiples of $\hbar \omega$ most authors already speak of photon exchange processes although the potential stems from a purely classical field. This notion is problematic since a formally correct introduction of the photon concept requires a quantization of the field generating the potential. For this purpose, the corresponding field equation has to be solved and a canonical quantization condition for Fourier amplitudes of the field is introduced which are then no longer complex-valued coefficients but interpreted as creation and annihilation operators.
In the following, we assume that the potential is generated by such a quantum field whose spatial mode is well approximated by the rectangular form. The quantum system we observe now consists of particle and field together. The total state $\ket{\Psi}$ of the composite quantum system is an element of the product Hilbert space $\mathcal H_{\rm total}=\mathcal H_{\rm particle}\otimes\mathcal H_{\rm field}$. If the particle is outside the field region the evolution of the state is given by $\hat{H}_0$ composed of the free single-system Hamiltonians $\hat{h}^{\rm p}_0$ and $\hat{h}^{\rm f}_0$ of particle and field \begin{eqnarray} \label{eq:H0} \hat{H}_0 &=& \hat{h}^{\rm p}_0 \otimes \1 + \1 \otimes \hat{h}^{\rm f}_0 \\ \label{eq:free_single_hamiltonians} \hat{h}^{\rm p}_0 &=& \frac{\hat p^2}{2m}, \quad \hat{h}^{\rm f}_0 = \hbar \omega \textstyle \left(\hat a^{\dagger} \hat a + \frac{1}{2}\right) \end{eqnarray}
Interaction between field and particle takes place if the particle is inside the field region, that is, its position coordinate fulfils $0\leq x_{\rm particle}<L$. Then, the evolution of the composite state $\ket{\Psi}$ is governed by the full Hamiltonian $\hat H = \hat{H}_0 + \hat{H}_{\rm int}$. Basically, the interaction Hamiltonian $\hat{H}_{\rm int}$ is given by the quantized version of the sinusoidal driving term \begin{equation} \label{eq:H_int} \hat{H}_{\rm int} = \lambda \, \1 \otimes \left(\hat a^{\dagger} + \hat a\right) \end{equation} where all constants and the eigenvalue of the operator acting on the particle (e.g., spin, charge) have already been absorbed into the coupling parameter $\lambda$. The explicit form of $\hat{H}_{\rm int}$ depends on the actual physical context, for example, dipole interaction for a charged particle in an electromagnetic field or the Zeeman Hamiltonian for uncharged particles in a magnetic field \cite{atom_photon_interactions}. Note that, although $\hat H_{\rm int}$ in the form of (eq.\ref{eq:H_int}) seems to act solely on the field part of the composite state, the sheer presence of an interaction is connected to the particle's position. Therefore, we again distinguish between three different states $\ket{\Psi_{I}}$, $\ket{\Psi_{II}}$, and $\ket{\Psi_{III}}$ for the composite quantum system (see fig.\ref{fig:quantized_scheme}). \begin{figure}\label{fig:quantized_scheme}
\end{figure}
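Before turning to specific initial field states, note that all operators involved can be represented numerically in a truncated Fock basis. The following minimal Python sketch (the truncation dimension and the coupling value are assumptions) constructs the free field Hamiltonian of (eq.\ref{eq:free_single_hamiltonians}) and the field factor of (eq.\ref{eq:H_int}):
\begin{verbatim}
import numpy as np

N = 40   # Fock-space truncation (assumed; increase until results converge)

# Annihilation operator: <n-1| a |n> = sqrt(n)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

hw = 1.0                                 # hbar*omega = 1 (units)
h_f = hw * (ad @ a + 0.5 * np.eye(N))    # free field Hamiltonian h_f
lam = 0.7                                # coupling constant lambda (assumed)
H_int_field = lam * (ad + a)             # field factor of H_int

# Sanity check: [a, a_dag] = 1 away from the truncation edge
comm = a @ ad - ad @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))   # True
\end{verbatim}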
\subsection{Fock states} \label{sec:fock}
As in the classical field case, we assume that the kinetic energy of the incoming particle is sufficiently high so that reflection at field entry can be neglected. Then, we can choose as ansatz for $\ket{\Psi_I}$ the particle's state to be a single plane wave with wave vector $k_0$ and the field to be present in a distinct Fock state $n_0$ \begin{equation} \label{eq:Psi_I} \ket{\Psi_I}=\ket{k_0} \otimes \ket{n_0} \end{equation} In order to get $\ket{\Psi_{II}}$, we switch to the position space representation of the particle's part of the wave function and match $\ket{\Psi_I}$ at $x_{\rm particle}\equiv x=0$ for all times $t$ with the general solution of the full Hamiltonian $\hat{H}_0 + \hat{H}_{\rm int}$. It is given by an arbitrary linear superposition of plane waves for the particle and displaced Fock states for the field \cite{Sulyok_Summi_Rauch_PRA}. The continuity conditions uniquely determine the expansion coefficients and hence $\ket{\Psi_{II}}$. At $x=L$, $\ket{\Psi_{II}}$ has to be matched with the general solution of the free Hamiltonian, which is given by an arbitrary superposition of plane waves and Fock states. The state $\ket{\Psi_{III}}$ behind the field region then reads \begin{equation} \label{eq:final_Fock} \ket{\Psi_{III}} = \sum_{n=0}^{\infty} t_{n_0 n}\ket{k_{n_0 - n}}\otimes\ket{n}, \qquad k_l^{2}=k_0^2+ \frac{2m}{\hbar} l \omega \end{equation} with \begin{equation} \label{eq:transcoef_Fock_algebraic} t_{n_0 n}=e^{i \bar\lambda^2 \omega \tau} \sum_{q=0}^{\infty} \bra n \hat D^{\dagger}(\bar \lambda) \ket q \bra q \hat D(\bar\lambda) \ket{n_0} e^{-i (q-n)\omega \tau} \end{equation} where $\hat D$ denotes the displacement operator, $\bar \lambda = \lambda/\hbar\om$ the coupling constant in units of the photon energy, and $\tau=m L/\hbar k_0$ the "time of flight" through the field region as in the classical case (eq.\ref{eq:abbreviations_classical2}). Details of the calculation as well as the explicit result for $\ket{\Psi_{II}}$ can be found in \cite{Sulyok_Summi_Rauch_PRA}. The matrix $t_{n_0 n}$ gives the amplitudes for the transition from an initial photon number $n_0$ to the final photon number $n$. The wave vector of the traversing particle changes accordingly from $k_0$ to $k_{n_0-n}$. Every emission of field quanta is absorbed in the kinetic energy of the particle and vice versa. The final state is the coherent superposition of all such combinations $\ket{k_{n_0-n}}$ and $\ket n$ and therefore highly entangled.
The algebraic form of the transition matrix $t_{n_0n}$ already allows for an intuitive interpretation of the physical processes happening during the transmission. When the particle enters the field, the initial Fock state $\ket{n_0}$ experiences a displacement whose amount depends on the coupling constant $\lambda$ (in units of the photon energy). Transitions to other, intermediate Fock states $\ket q$ then occur. When the particle leaves the field, a "back-displacement" of the intermediate state takes place. The overlap with the final Fock state $\ket{n}$ at field exit, weighted with a phase factor reflecting the energy difference between the intermediate and the final Fock state, gives the amplitude for the path $n_0 \rightarrow q \rightarrow n$. All intermediate transitions contribute coherently to the transition amplitude $t_{n_0 n}$.
Summation over all final Fock states $\ket n$ has to be performed to obtain the total final state $\ket{\Psi_{III}}$, which additionally acquires an overall phase factor from a constant energy shift in region II arising from completing the square in the full Hamiltonian $\hat H$.
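The algebraic expression (eq.\ref{eq:transcoef_Fock_algebraic}) is also straightforward to evaluate numerically, since $\hat D(\bar\lambda)=\exp[\bar\lambda(\hat a^{\dagger}-\hat a)]$ for real $\bar\lambda$. The following minimal Python sketch (with assumed values for $\bar\lambda$ and $\omega\tau$) builds the transition matrix in a truncated Fock basis and checks its unitarity, $\sum_n |t_{n_0 n}|^2 = 1$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 60                                     # truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T

lam_bar = 0.7                              # lambda/(hbar*omega), assumed
omega_tau = 1.3                            # omega*tau, assumed

D = expm(lam_bar * (ad - a))               # displacement D(lam_bar), unitary
ph = np.arange(N)                          # photon-number index

# t[n, n0] = e^{i lb^2 wt} sum_q <n|D_dag|q> e^{-i(q-n) wt} <q|D|n0>
t = (np.exp(1j * lam_bar**2 * omega_tau)
     * np.diag(np.exp(1j * omega_tau * ph)) @ D.conj().T
     @ np.diag(np.exp(-1j * omega_tau * ph)) @ D)

n0 = 3
print(np.sum(np.abs(t[:, n0])**2))         # -> 1.0 (unitarity)
\end{verbatim}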
The algebraic form of the transition matrix $t_{n_0n}$ (eq.(\ref{eq:transcoef_Fock_algebraic})) can be further developed in order to get an analytic expression. The calculation is straightforward, but rather lengthy and requires the nontrivial Kummer transformation formula for confluent hypergeometric functions. Finally we arrive at \begin{equation} \label{eq:transcoef_Fock_analytic} t_{n_0 n} = e^{i \Phi} \textstyle \sqrt{\frac{n_0!}{n!}} \,e^{-\frac{\Lambda^2}{2}} \, \Lambda^{n-n_0} \,\mathcal L_{n_0}^{n-n_0}(\Lambda^2) \end{equation} where $\mathcal L_n^{\alpha}(x)$ denotes the generalized Laguerre polynomial and \begin{eqnarray} \label{eq:abreviations_quantum} \textstyle \Phi&=& \bar\lambda^2 \left(\omega\tau -\sin \omega \tau\right)
+ (n-n_0) \big(\frac{\omega \tau}{2}- \frac{\pi}{2} \big) \\ \label{eq:coupling_strength} \Lambda&=&2 \bar\lambda \sin\frac{\omega\tau}{2} . \end{eqnarray}
The coupling strength parameter $\Lambda$ indicates the capacity of the particle-field system to exchange energy and contains the coupling constant $\lambda$ (in units of $\hbar \om$) and the sinusoidal resonance factor that already occurred in the classical treatment. The probability that the initial photon number $n_0$ changes to the final photon number $n$ after the transmission of the particle through the field is given by $P_{n_0,n}=|t_{n_0 n}|^2$. \begin{eqnarray}
\label{eq:probaility_fock_states} \textstyle P_{n_0, n}=\frac{n_0!}{n!} \, e^{-\Lambda^2} \, (\Lambda^2)^{n-n_0} \, \big(\mathcal L_{n_0}^{n-n_0}(\Lambda^2)\big)^2 \end{eqnarray} In fig.\ref{fig:fock_x0}, the transition probabilities $P_{n_0, n}$ for various coupling strengths $\Lambda$ are depicted. \begin{figure}\label{fig:fock_x0}
\end{figure} As in the classical case, the probability for exchanging a higher number of photons increases with increasing coupling strength, but absorption and emission of the same number of photons are not equally probable. In general, $P_{n_0, n}=P_{n, n_0}$ but $P_{n_0, n_0+q}\neq P_{n_0, n_0-q}$. This asymmetry is reflected in the expectation values of the energy of particle and field after the interaction process. \begin{eqnarray} \label{eq:final_energy_particle} \bra{\Psi_{III}} \hat h_0^{\rm p} \otimes \1 \ket{\Psi_{III}} &=& \frac{\hbar^2 k_0^2}{2m}-\hbar \omega \Lambda^2 \\ \label{eq:final_energy_field} \bra{\Psi_{III}} \1 \otimes \hat h_0^{\rm f} \ket{\Psi_{III}} &=& \textstyle \hbar \omega \left(n_0+\Lambda^2+ \frac{1}{2} \right) \end{eqnarray} Since we assumed a high-energy incoming particle for which reflection can be neglected, the net energy transfer goes from particle to field. Only when the initial photon number becomes large with respect to the normed coupling constant, $n_0 \gg \bar\lambda$, is the symmetry between emission and absorption restored. We can then use the result from the appendix of \cite{PolonskiCohenTan} \begin{equation} \label{eq:D_fock_large_n0} \bra{n_0+l}\hat D(\bar\lambda)\ket{n_0+r} = J_{l-r}(2\bar\lambda \sqrt{n_0}) ,\quad n_0 \gg \bar\lambda \end{equation} and apply Graf's addition theorem for Bessel functions in (eq.\ref{eq:transcoef_Fock_algebraic}) to get \begin{equation} \label{eq:prob_fock_large_n0} P_{n_0, n_0+q}=J_q(2 \Lambda \sqrt{n_0} )^2=P_{n_0, n_0-q} \end{equation} Large initial photon numbers indicate the transition to the classical field regime, and indeed, the Bessel function in (eq.\ref{eq:prob_fock_large_n0}) is reminiscent of the classical result (eq.\ref{eq:classical_solutions}). But if we trace over the field state, the particle is still left in an incoherent mixture of the $\ket{k_n}$ weighted with the $J_n^2$, as to be expected from the entangled total state $\ket{\Psi_{III}}$. A proper transition from the quantum to the classical case can only be achieved by starting with a coherent field state (see sec.\ref{sec:coherent}).
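The closed form (eq.\ref{eq:probaility_fock_states}) and the limit (eq.\ref{eq:prob_fock_large_n0}) can be evaluated with standard special functions. The following minimal Python sketch (parameter values are merely illustrative) exploits the symmetry $P_{n_0,n}=P_{n,n_0}$ to keep the upper Laguerre index non-negative:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre, jv

def P(n0, n, Lam):
    # eq. (probability_fock_states); symmetry keeps the upper index >= 0
    m, d = min(n0, n), abs(n - n0)
    x = Lam**2
    return (factorial(m) / factorial(m + d)) * np.exp(-x) * x**d \
           * eval_genlaguerre(m, d, x)**2

Lam = 0.9                                    # coupling strength (assumed)
print(sum(P(2, n, Lam) for n in range(60)))  # -> 1.0 (normalization)
print(P(2, 5, Lam), P(5, 2, Lam))            # symmetry P_{n0,n} = P_{n,n0}

# n0 = 0 gives a Poissonian with mean Lam^2 (cf. the vacuum case below)
print(P(0, 3, Lam), np.exp(-Lam**2) * Lam**6 / factorial(3))

# Large n0: P_{n0,n0+q} approaches J_q(2*Lam*sqrt(n0))^2
n0, q, Lam = 100, 2, 0.1
print(P(n0, n0 + q, Lam), jv(q, 2 * Lam * np.sqrt(n0))**2)
\end{verbatim}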
If the length $L$ of the field region and the wave vector $k_0$ are tuned such that the "resonance" condition $\omega\tau= 2\pi n,\ n\in \mathbb N$ is fulfilled, no energy is transferred between particle and field, just as in the classical case. But, contrary to the classical treatment, an overall phase factor remains in form of $\ket{\Psi_{III}}=e^{i \bar\lambda^2 \omega\tau} \ket{k_0}\otimes\ket{n_0}$ and could be accessible in an interferometric setup.
\subsection{Vacuum state} \label{sec:vacuum} Another remarkable feature of the quantum field treatment can be revealed by the investigation of the vacuum state. For a classical field, vacuum is realised by simply setting the potential to zero, resulting in an unaltered, free evolution of the plane wave ($\ket{\psi_I}=\ket{\psi_{III}}=\ket{k_0}$). In the quantized treatment, the vacuum is represented by an initial Fock state $\ket{n_0=0}$, which still interacts with the particle and yields as final state $\ket{\Psi_{III}}$ behind the field region \begin{equation} \label{eq:state_vacuum} \ket{\Psi_{I}}=\ket{k_{0}}\otimes\ket{0} \quad \Rightarrow \quad \ket{\Psi_{III}} = \sum_{n=0}^{\infty} t_{0 n}\ket{k_{-n}}\otimes\ket{n} \end{equation} with a photon exchange probability \begin{equation}
P_{0,n}= |t_{0n}|^2= \frac{1}{n!}\, e^{-\Lambda^2}\,\Lambda^{2n}. \end{equation} The particle thus transfers energy to the vacuum field, leading to a Poisson-distributed final photon number. Consider, for example, a superconducting resonant circuit as the source of the field. The magnetic field along the axis of a properly shaped coil is well approximated by the rectangular form. A particle with a magnetic dipole moment passing through the coil then interacts with the circuit and excites it, with a measurable loss of kinetic energy, even though classically there is no field to couple to. The phenomenon that the vacuum in quantum field theory does not amount to "no influence", known from the Casimir force and the Lamb shift, is clearly visible here as well.
\subsection{Thermal state} \label{sec:thermal} In realistic experimental situations, the pure vacuum state cannot be achieved. Due to unavoidable coupling to the environment, which acts as a heat bath with finite temperature $T$, higher photon numbers are excited as well, and we encounter the incoherent, so-called thermal state $\rho_{\rm thermal}$ for the field \begin{equation} \label{eq:thermal_state} \rho_{\rm thermal}= \sum_{n=0}^{\infty} y^n (1-y) \ket{n}\bra{n}, \qquad y=e^{-\frac{\hbar\om}{k_B T}} \end{equation} We now choose the field to be initially in such a thermal state. After the particle has traversed the field region, the probability $P_n^{\rm therm}$ of finding the field in a distinct Fock state $\ket n$ is given by \begin{equation} \label{eq:prob_thermal_state} \textstyle P_n^{\rm therm} = e^{-\Lambda^2(1-y)} \ (1-y)\ y^n L_n\big(-\frac{\Lambda^2(1-y)^2}{y}\big) \end{equation} where $L_n$ denotes the ordinary Laguerre polynomial. As depicted in fig.\ref{fig:thermal_figure}, the initial thermal distribution changes when the coupling strength $\Lambda$ reaches the order of $k_B T/\hbar\om$.
\begin{figure}
\caption{Probability distribution of the final photon number for different coupling strengths $\Lambda = 2\frac{\lambda}{\hbar \omega}\sin\frac{\omega \tau}{2}$ if the field was initially in a thermal state (temperature $T, k_B T/\hbar\omega\approx 10$). }
\label{fig:thermal_figure}
\end{figure}
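The distribution (eq.\ref{eq:prob_thermal_state}) is equally simple to evaluate. The following minimal Python sketch assumes $k_B T/\hbar\omega=10$ as in fig.\ref{fig:thermal_figure}; for $\Lambda=0$ it reproduces the initial thermal distribution with mean photon number $y/(1-y)$:
\begin{verbatim}
import numpy as np
from scipy.special import eval_laguerre   # ordinary Laguerre polynomial L_n

kT_over_hw = 10.0                         # k_B T / (hbar*omega), assumed
y = np.exp(-1.0 / kT_over_hw)

def P_therm(n, Lam):
    # eq. (prob_thermal_state)
    x = -Lam**2 * (1 - y)**2 / y
    return np.exp(-Lam**2 * (1 - y)) * (1 - y) * y**n * eval_laguerre(n, x)

n = np.arange(400)
for Lam in (0.0, 5.0, 10.0):
    p = P_therm(n, Lam)
    print(Lam, p.sum(), (n * p).sum())    # normalization and mean photon number
\end{verbatim}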
\subsection{Coherent state} \label{sec:coherent}
Now, we consider the field to be initially in a coherent state $\ket{\alpha}$ labelled by the complex number $\alpha=|\alpha| e^{i\varphi_{\alpha}}$ \begin{equation} \label{eq:coherent_state_def} \ket{\Psi_I}=\ket{k_0}\otimes\ket{\alpha}, \qquad
\ket{\alpha}=e^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^{\infty}\frac{\alpha^n}{ \sqrt{n!}}\ket n .
\end{equation} For the further evaluation, we start from the algebraic form of the transition matrix (eq.\ref{eq:transcoef_Fock_algebraic}) and work in the position representation of the particle's part of the wave function. Expansion of the wave vectors $k_n$ (eq.\ref{eq:final_Fock}) around the initial wave vector $k_0$ enables us to absorb phase factors in the coherent state and evaluate the displacements. The projection onto the position eigenstate $\ket x \in \mathcal H_{\rm particle}$ after the transmission reads \begin{eqnarray} \nonumber
\braket{x|\Psi_{III}} &=& e^{i\bar{\lambda}^2 \omega\tau}
e^{-i\bar{\lambda}^2 \sin \om\tau}
e^{i k_{0} x} \\ && \label{eq:psi3_coherent_endres}
e^{ i \Lambda |\alpha| \sin(\varphi_{\Lambda}(x)-\varphi_{\alpha})} \ket{\alpha + \Lambda e^{i\varphi_{\Lambda}(x)}} \end{eqnarray} where \begin{eqnarray} \label{eq:psi3_coherent_endres_abbrev} \varphi_{\Lambda}(x) =\frac{\omega\tau}{2} - \frac{\om}{v_0} x - \frac{\pi}{2} . \end{eqnarray} The entanglement between particle and field is now indicated by the explicit occurrence of the particle's position coordinate $x$ in the final (coherent) field state. If the particle is detected at a certain position $x_1$, the field state is projected onto $\ket{\alpha + \Lambda e^{i\varphi_{\Lambda}(x_1)}}$. We can now place two detectors at positions $x^+$ and $x^-$ which satisfy \begin{eqnarray} \label{eq:phi_plus} \varphi_{\Lambda}(x^+) &\equiv& \varphi_{\Lambda}^+=\varphi_{\alpha} +2 n \pi \\ \label{eq:phi_minus} \varphi_{\Lambda}(x^-) &\equiv& \varphi_{\Lambda}^-=\varphi_{\alpha} + (2m-1) \pi \end{eqnarray}
where $n$ and $m$ are arbitrary integers, and examine the photon number distributions of the related coherent states. The phases $\varphi_{\Lambda}$ are chosen such that the average photon numbers are given by $||\alpha| + \Lambda|^2$ for $x^+$ and $||\alpha| - \Lambda|^2$ for $x^-$, respectively. For a sufficiently high coupling strength $\Lambda \gtrsim \frac{1}{2}$, the corresponding distributions cease to overlap. Detecting the particle around $x^-$ thus increases the probability of having roughly $||\alpha| - \Lambda|^2$ photons in the field, whereas detection around $x^+$ is connected to an average photon number of $||\alpha| + \Lambda|^2$. Likewise, finding $||\alpha| + \Lambda|^2$ photons in the field determines the particle's position to be around $x^+$, and analogously for $x^-$ (see fig.\ref{fig:poissonians}). The photon number thus contains information about the particle's position. \begin{figure}\label{fig:poissonians}
\end{figure}
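How quickly the two photon number distributions separate with growing coupling strength can be estimated numerically. The following minimal Python sketch (the value of $|\alpha|$ is assumed) compares the Poissonian photon statistics associated with detection at $x^+$ and $x^-$:
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

alpha_abs = 3.0                    # |alpha| (assumed)
n = np.arange(80)

for Lam in (0.1, 0.5, 1.0):
    p_plus = poisson.pmf(n, (alpha_abs + Lam)**2)   # detection at x+
    p_minus = poisson.pmf(n, (alpha_abs - Lam)**2)  # detection at x-
    overlap = np.minimum(p_plus, p_minus).sum()     # shared probability mass
    print(Lam, round(overlap, 3))
# The overlap shrinks with growing Lam: the photon number then carries
# which-position information about the particle.
\end{verbatim}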
If no measurement on the particle is carried out, the field state is obtained from the total density matrix $\rho =\ket{\Psi_{III}}\bra{\Psi_{III}}$ by performing the partial trace over the particle's degrees of freedom. We get an incoherent mixture of coherent states for the field's density matrix \begin{equation} \rho_{\rm field} =\int dx \ket{\alpha + \Lambda e^{i\varphi_{\Lambda}(x)}} \bra{\alpha + \Lambda e^{i\varphi_{\Lambda}(x)}} \end{equation} which can be illustrated in the Fresnel plane (see fig.\ref{fig:coherent}). \begin{figure}\label{fig:coherent}
\end{figure}
Like in the case of Fock states, on average, the particle transfers energy to the field, as indicated by the expectation values \begin{eqnarray} \label{eq:final_energy_particle_coherent} \bra{\Psi_{III}} \hat h_0^{\rm p} \otimes \1 \ket{\Psi_{III}} &=& \frac{\hbar^2 k_0^2}{2m}-\hbar \omega \Lambda^2 \\ \label{eq:final_energy_field_coherent} \bra{\Psi_{III}} \1 \otimes \hat h_0^{\rm f} \ket{\Psi_{III}} &=& \textstyle
\hbar \omega \left(|\alpha|^2 + \Lambda^2 + \frac{1}{2} \right) \end{eqnarray}
If we increase the mean photon number such that the coupling strength $\Lambda$ can be neglected against $|\alpha|$, we can simplify (eq.\ref{eq:psi3_coherent_endres}) and arrive at \begin{equation}
\ket{\Psi_{III}}= e^{i\bar{\lambda}^2 \omega \tau} e^{-i\bar{\lambda}^2 \sin\omega \tau}\sum_{n=-\infty}^{+\infty} J_n(\Lambda |\alpha|) e^{-i n \eta} \ket{k_n}\otimes \ket{\alpha}\label{eq:psi3_coherent_high_phot_endres} \end{equation} where we have used the abbreviation $\eta$ from the classical section (eq.\ref{eq:abbreviations_classical1}) with $\varphi_{\alpha}\ \widehat{=}-\varphi$.
Disregarding the back action of the particle on the field thus leads to a simple product state of the composite quantum system and therefore to disentanglement. By tracing over the field, we obtain the particle's state which is now a coherent superposition of $\ket{k_n}$ weighted with the Bessel functions $J_n$ and a phase factor $e^{-in\eta}$ as in the classical case. A general survey on the correspondence between time-independent Schr\"odinger equations for the composite particle-field system and time-dependent Schr\"odinger equations for the particle alone that contain the expression for the classical field as potential term can be found in \cite{Braun_Briggs_Classical_limit}.
If we choose the initial coherent state $\ket\alpha$ to be the vacuum state $\ket 0$ and therefore set $\alpha=0$ in (eq.\ref{eq:psi3_coherent_endres}) we consistently end up with the same final state as in (eq.\ref{eq:state_vacuum}).
At resonance ($\omega\tau= 2\pi n, n\in \mathbb N$), no photon exchange takes place and the initial state again only obtains an overall phase factor and becomes $\ket{\Psi_{III}}=e^{i \bar\lambda^2 \omega\tau} \ket{k_0}\otimes\ket{\alpha}$ after the interaction.
\section{Conclusion} \label{sec:conclusion} The quantum mechanical scattering on a rectangular potential created by a quantum field is completely analytically solvable for incoming particles whose energy is high enough to neglect reflections. Transition amplitudes and photon exchange probabilities can be entirely expressed in terms of standard functions for the most important types of initial field states, that is, Fock, thermal, and coherent states. The quantized treatment of both particle and field reveals their entanglement in the interaction process. Therefore, the setup could be of interest for quantum information experiments where a spatially fixed component (field) and a movable one (particle) are required. For Fock states, entanglement occurs between the energy eigenstates of the particle and the photon number states of the field, whereas for a coherent initial field state, the particle's position and the photon number become entangled.
The Schr\"odinger equation of the composite system is time-independent and thus, the total energy is conserved in the transmission process. Though, photon emission and absorption are generally not equally probable, on average, the high-energetic, incoming particle transfers energy to the field. Only if the photon number in the field becomes large, the symmetry between emission and absorption is restored. However, in case of pure Fock states, entanglement is nevertheless maintained and the energy transfer happens incoherently. Just for coherent field states whose mean photon number is high against the coupling strength so that the influence of the particle on the field can be neglected the transition to the classical, coherent energy exchange becomes visible.
A remarkable feature of the fully quantized treatment is the interaction with the vacuum. Though from the classical point of view a free evolution of the particle should take place, the particle transfers energy to the field and their combined state changes.
For the experimentally more realistic situation of not a pure vacuum but a thermal field state, visible effects occur once the coupling constant becomes comparable to the thermal energy ($k_B T$) of the environmental heat bath.
At resonance, that is, when the length of the field region and the particle's wavelength are related such that destructive interference suppresses any photon exchange, the wave function nevertheless changes and acquires an overall phase factor. In the quantized treatment, a completely unaltered evolution only happens in the trivial case of a vanishing coupling constant.
\end{document} | arXiv |
The use of Local Ecological Knowledge as a complementary approach to understand the temporal and spatial patterns of fishery resources distribution
Mauro Sergio Pinheiro LIMA¹, Jorge Eduardo LINS OLIVEIRA¹, Marcelo Francisco de NÓBREGA¹ & Priscila Fabiana Macedo LOPES²
Acquiring fast and accurate information on ecological patterns of fishery resources is a basic first step for their management. However, some countries may lack the technical and/or the financial means to undergo traditional scientific samplings to get such information; therefore affordable and reliable alternatives need to be sought.
We compared two different approaches to identify occurrence and catch patterns for the three main fish species caught with bottom-set gillnets used by artisanal fishers from northeast Brazil: (1) scientific on-board records of the small-scale fleet (n = 72 trips), and (2) interviews with small-scale fishers on Local Ecological Knowledge (LEK) (n = 32 interviews). We correlated (Pearson correlations) the months cited by fishers (LEK) as belonging to the rainy or to the dry season with observed periods of higher and lower precipitation (SK). The presence of the three main fish species at different depths was compared between LEK and SK by Spearman correlations. Spearman correlations were also used to compare the depths of greatest abundance (those with the highest Capture per Unit Effort, CPUE) of these species; the CPUEs were ranked in descending order.
Both methods provided similar and complementary bathymetric patterns of species occurrence and catch. The largest catches occurred in deeper areas, which also happened to be less intensively fished. The preference for fishing in shallower and less productive areas was mostly due to environmental factors, such as weaker currents and less drifting algae at such depths.
Both on-board and interview methods were accurate and brought complementary information, even though fishers provided data faster than scientific on-board observations. When time and funding are not limited, integrative approaches such as the one presented here are likely the best option to obtain information; otherwise, fishers' LEK could be the better choice when a compromise between speed, reliability, and cost needs to be reached.
The efficient management of fishery resources minimally involves the understanding of seasonality and species distribution [1, 2]. These factors are crucial to the dynamics of small- and large-scale fishing, as they are known to influence life cycles, abundance, biomass, and species richness [3,4,5], and to determine spawning [6] and food aggregations [6, 7].
Density-dependent and density-independent factors limit the spatial distribution of populations and species, therefore influencing individual survival and reproduction, and population abundance [8]. Understanding how fishing and environmental variability interact to produce an effect on exploited populations (target species) has been an evolving and intriguing question in fisheries science for decades [9], being a limiting factor to the appropriate management of fishing stocks. Added to that, fisheries also have to be evaluated in the context of a changing environment [10,11,12], which is better done under an ecosystem approach [13,14,15]. However, an ecosystem approach requires the integration of the spatial dynamics and the seasonal variability of the various components of the fishery, which include not only the natural resources, but also the fishers [10]. Fishers retain specific knowledge of fishing resources that could be crucial to support an ecosystem approach to fisheries (EAF) in developing tropical countries, because these are places where there are often poor or no data on the status of fish stocks at local or regional scales [16, 17].
In EAF, one of the first decisions to be made regards the need to establish fishing boundaries, which can be done through spatial jurisdictions, regional fisheries institutions or natural physical and technological (limits imposed by the fleet autonomy, for instance) boundaries. The scale of a fishery system can vary greatly and ecosystems are not always clearly defined entities with unambiguous boundaries. Human dimensions, with identification and involvement of stakeholders, are central to EAF. Understanding their values, needs, aspirations, and current livelihood circumstances is key to informing policy and influencing management decisions [18]. It is also important to identify the scale of the fishery, as industrial and small-scale fisheries are completely different realms, with distinct fish targets, dynamics (intensity, gear, depth, and season) and markets.
For small-scale fisheries, the understanding of the environmental factors that affect and determine their catches over temporal and spatial scales is usually limited and often inferred without sound scientific information or technological geolocation support [19]. Their industrial counterparts rely on scientific and technological advances that identify seasonal and bathymetric patterns of fishery resources, allowing them to optimize efforts to maximize catches from the continental shelf to abyssal zones [20].
Small-scale fisheries are not restricted to the tropics or the developed countries, but these are the regions where most of these fisheries exist [21]. The tropical seasonality is defined by marked rainy and dry periods that influence the life cycle, and therefore, the spatial-temporal occurrence of many commercial species [22, 23]. Small-scale fisheries have their traditional grounds, in accordance to the specific periods of aggregations and migrations of fish stocks. For example, in the Brazilian northeast, summers (dry season) are the period to catch groupers and snappers at the continental shelf edge [24], whereas the serra Spanish mackerel is caught in the rainy months all over the northern coast of South America [25].
Such a need to perceive the environment through its natural cues likely explains why fishers have an understanding that is not limited to the recognition of the spatiotemporal patterns of their target species [1]. Fishers have enough accumulated knowledge to make them sensitive to some environmental changes, with the ability to interpret them, and provide production estimates [26]. Such fishers' ecological knowledge (LEK) is especially important in areas with scarce information on fishing statistics, and it can sometimes be the only information available to build up fisheries management strategies [17].
Traditionally, fisheries management recommendations have tended to demand complex research models and large amounts of statistical data based only on conventional scientific information, which, despite all the time, funding, and expertise involved, may still provide controversial estimates [27,28,29]. Such limitations, exacerbated in countries where science and statistics are not a priority, brought to the forefront the need to look beyond the scientific paradigm, and learn how to access information that is affordable and quickly available [30, 31].
Fishers' LEK has been proposed as such a solution to restore past yield data not obtained by governments or researchers [32,33,34], although the information gathered through LEK has not been readily accepted [35]. Today, there is a growing recognition that fishers' LEK could fill gaps in biological, ecological, and management knowledge [30, 34, 36, 37], as long as there is some caution in interpreting its quality and accuracy against science [32]. Fishers may provide accurate information on fish diet, for instance [38], which does not exempt them from misinterpreting such information. For example, having observed that a given fish is once in a while caught with lobster in its stomach, fishers may see such fish as a competitor capable of affecting their lobster fishing [39]. Therefore, it is important, first, to identify the type of biological information that fishers can reliably provide, which will depend on social, economic, and local ecological factors. Second, it is also relevant to interpret such information through a scientific filter.
Finally, the large extension of marine ecosystems makes it difficult, financially and technically, to gather detailed scientific information about them over time [17, 40]. On the other hand, fishers' LEK can cover large coastal and offshore areas and can also track changes over large temporal scales, potentially minimizing costs and improving management success [17, 31, 41]. Considering such potential, some studies have suggested the need to adopt fisheries co-management systems that integrate fishers and their knowledge into the scientific knowledge and the political management process usually done in partnership with governments [42, 43]. The success of such initiatives depends on multiple factors, one of them being the inclusion of open-minded scientists capable of valuing fishers' LEK to establish management goals and enforcement mechanisms [17], in a learning-by-doing process [44].
In this study, we assessed fishers' LEK and a more conventional fishery approach, here described as scientific knowledge (SK), to know whether LEK could yield reliable and accurate information regarding catch patterns and the environmental influence on fishing decisions. More specifically, we hypothesized that fishers choose their fishing spots not only based on spot productivity, but also on their perception of environmental factors that limit fishing. We specifically assessed the seasonal and bathymetric patterns of species caught by small-scale fishers through two different approaches: (1) on-board scientific monitoring of small-scale fisheries, and (2) structured interviews with fishers about their local ecological knowledge of fishing regarding the occurrence and catch biomass of target species. Such methods were assessed for their differences and complementarities, as an evaluation of the possibility of using LEK to provide spatial-temporal data on catch whenever more traditional scientific approaches are not an option.
Six municipalities of the eastern coast of Rio Grande do Norte, a coastal state in the Brazilian northeast, were sampled (Fig. 1). These municipalities were chosen because they cover an important region where bottom-set gillnets are used by commercial small-scale fisheries; they accounted for an average of 78.4% of the catch of the entire state between 2011 and 2014, with a minimum of 76.8% in 2014 and a maximum of 81.9% in 2011 (unpublished data provided by the "Sea around Us" reconstruction effort).
The study region is marked by a rainy season from February to July, and a dry season from September to January. The southeast winds are more frequent and stronger between April and July, and are usually accompanied by rain [45]. During rainfall periods, the ocean currents and southeast winds produce 1.5 m average height waves that influence the entire continental shelf [46, 47].
Throughout the year, the small-scale fishing fleet performs round trips with boats powered mostly by sail and by small one-cylinder motors. Most do not have supporting equipment for fishing and navigation, and the fishing trips last between one and four days [48]. In fact, most of the trips are completed within the same day. Even when longer, these trips do not usually reach places further away from the coast. The average distance for fishing trips in the region is 10 km off the coast.
The bottom-set gillnets used in the study region generally have two different settings. The first one usually has a smaller mesh size, which is referred to by fishers as a "fine mesh", varying from 80 to 100 mm (opposite knot) with a nylon thickness varying from 0.40 to 0.60 mm. The net total length varies between 500 and 2,500 m and has an average height of 1.6 m. This first setting is used throughout the year and is cast mainly close to reefs and over biodetrital substrate (rhodoliths of calcareous algae and carbonate debris from marine organisms), remaining in the water for around 3.5 h. This setting is used anywhere on the continental shelf. The second kind of bottom-set gillnet has a larger mesh, called "thick net", and is used mostly during the spring and summer. Its mesh size ranges between 120 and 320 mm and the length between 558 and 2,100 m. Such nets are set mainly on muddy shallow substrates (5–20 m), remaining submerged for about 5 h.
The data collection was divided in two phases. The first phase consisted in obtaining information on Scientific Knowledge (SK) through on-board observations of the fleet that used bottom-set gillnets between June 2012 and June 2014. The second phase consisted in registering Fishers' Knowledge (LEK) by using semi-structured interviews applied only to fishing masters and active expert fishers that were using bottom-set gillnets during the study period. The second phase was refined after collecting SK data and was done between August 2014 and January 2015. The data collection could not be simultaneous because the on-board observations were used to identify difficulties and solutions implemented by fishers when at sea, during certain periods of the year or in specific geographical areas. Such observations drove the development of the second phase (LEK).
For the SK phase, only data coming from small motorized boats (8 to 9 m) that used "fine" mesh sizes (80–100 mm opposite knot) were considered. The fleet with such characteristics accounts for most of the fishing effort in the area. A total of 72 net settings were registered, under the criteria defined above.
The number of respondents for the LEK phase was calculated beforehand, by visiting each municipality to estimate the number of boats and of active fishers. In each place, several fishers were asked about the number of vessels using bottom-set gillnets and the average crew size per boat. The fishers' responses were averaged out to estimate the total number of fishers working in bottom-set gillnet boats. That resulted in an estimate of 28 boats and 56 fishers. Of those, 32 skilled fishers (according to their peers) were interviewed, all of whom were fishing in boats between 8 and 9 m long and using fine meshes. Most (79% of 32) interviewees were fishing masters; of these, 13 had also been followed previously during the on-board data collection phase. Such preference for fishing masters was due to their accumulated knowledge. These are usually the most experienced fishers in the crew and the ones making decisions regarding where to go and how long to stay on each spot. Overall, the respondents were experienced fishers (average = 30 years as a fisher, 22 years using bottom-set gillnets), even though they were relatively young (most were between 35 and 58 years old). These fishers tended to have a low level of education (12% were illiterate and 88% did not complete primary school) and low income (62% lived on less than the minimum wage).
We only interviewed fishers after carefully explaining the goal of our research personally and individually and after they gave their oral consent to take part in the study. We explained that they had the right to refuse participation, but no fisher we approached refused to join the study.
Scientific knowledge (SK) – On-board data
Every time a gillnet was set, three sets of variables were recorded: (1) geographical, which included geographic location and the average depth of the setting site; (2) biological, including species identification and total catch (kg) per species; and (3) fishing related variables, including net length and height, mesh size and time the net spent in the water (soaking time).
The fishing related variables were used to calculate the fishing effort (F):
$$ F = A \times T \qquad (1) $$
where A is the gillnet area in square meters (height × total length) and T is the soaking time in hours; F is thus expressed in m^2 * h.
The fishing effort was then used to calculate the CPUE (Capture per Unit Effort), which is an indirect measure of local fish abundance. The CPUE was defined as the ratio between catch and effort and calculated for both methods (LEK and SK), with the same fishing effort parameters:
$$ \mathrm{CPUE} = C / F \qquad (2) $$
where C is the catch in weight per sample (fish caught per cast of a bottom-set gillnet) and F is the fishing effort (Eq. 1). The CPUE is then presented as g/(m^2 * h). The CPUE was calculated for the different depths and for each of the species considered in this study.
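As a minimal illustration, Eqs. 1 and 2 translate directly into code; the gear values below are hypothetical, chosen within the ranges reported above for the "fine mesh" nets:

```python
def fishing_effort(net_length_m, net_height_m, soak_time_h):
    """Fishing effort F = A * T (Eq. 1), expressed in m^2 * h."""
    return net_length_m * net_height_m * soak_time_h

def cpue(catch_g, effort_m2h):
    """Capture per Unit Effort, CPUE = C / F (Eq. 2), in g/(m^2 * h)."""
    return catch_g / effort_m2h

# Hypothetical example within the reported gear ranges
F = fishing_effort(net_length_m=1500, net_height_m=1.6, soak_time_h=3.5)
print(F)               # 8400.0 m^2 * h
print(cpue(12000, F))  # ~1.43 g/(m^2 * h) for a hypothetical 12 kg catch
```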
Interviews with fishers - LEK
During the on-board observation, it was noted that: 1) a few species, namely blue runner [Caranx crysos (Mitchill, 1815)], serra Spanish mackerel (Scomberomorus brasiliensis Collette, Russo & Zavala-Camin, 1978), and lane snapper [Lutjanus synagris (Linnaeus, 1758)], comprised most of the catch; and that 2) fishing seemed limited by local environmental conditions, such as the current strength and the presence of drifting algae. Such preliminary observations of possible environmental factors limiting the fishing operation at different depths and periods of the year drove the design of the LEK interview, which also focused on the three main target species.
The interviews had quantitative and qualitative questions regarding fishers' perception of the fishing operation, gillnet measures, how they define deep and shallow fishing areas (in meters), and soaking time (in hours). Such information was used to calculate fishing effort and CPUE, using the same definitions presented before (Eqs. 1 and 2). The interviews also approached the depth of fishing, the average weight of the catch, and the harvesting period for each of the three species (Additional file 1). Similarly to the SK approach, such information (effort and harvest) was used to calculate the CPUE per depth and species.
Instead of directing fishers to specific environmental factors assumed to be relevant to affect their fishing, fishers were simply asked what environmental factors they considered adverse for fishing. For each factor that the fishers mentioned, they were inquired about the period of the year when it most commonly happens. As expected, based on on-board observations, strong currents and large amounts of drifting algae were the main factors mentioned by the fishers (62% mentioned at least one of these factors). Fishers said that these two factors hamper fishing operations and can even result in the loss of gillnets. Therefore, only data for these two environmental factors are presented. The fishers were also asked when each of the three species was most abundant along the year.
Fishers were asked about periods in two moments of the interview: the periods of adverse environmental factors to fishing and the peak periods for catching the three target species. If, instead of answering a specific month(s) for either of these questions, the interviewee answered with a season (dry or wet season), he was asked to identify the months that were most typically considered dry and wet, in his own perception.
We used the number of times the fishers cited a given month to calculate the seasonal frequency of the environmental factors (currents and drifting algae) affecting fisheries and of when each of the three target species was considered more abundant. For instance, if a fisher said that wet months were those between April and August, these months received "1", and the remaining months "0". This was done for every fisher and for the five questions regarding: wet season, dry season, months of strong currents, months of drifting algae, and months of higher catches for the three main species. We used the Pearson correlation method to check the relationship between the absolute frequency of answers each month received and the incidence of environmental factors and occurrences (also measured as number of times cited) of the species (Fig. 2). The precipitation pattern was provided by EMBRAPA (Brazilian Agricultural Research Corporation) for the period June 2012 to June 2014.
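The following sketch illustrates this procedure with hypothetical citation and rainfall data (the actual analysis used the 32 interviews and the 2012–2014 precipitation series):

```python
import numpy as np
from scipy.stats import pearsonr

# Rows = fishers, columns = Jan..Dec; 1 if the month was cited as rainy
cited_rainy = np.array([
    [0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
])
monthly_citations = cited_rainy.sum(axis=0)   # absolute citation frequency

# Hypothetical monthly rainfall (mm) standing in for the measured series
rainfall = np.array([30, 55, 120, 180, 260, 280, 240, 110, 40, 25, 20, 25])

r, p = pearsonr(monthly_citations, rainfall)
print(round(r, 2), p)
```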
Map of the study area. Municipalities are highlighted by the limits and numbered in ascending order from N to S. Touros – 1, Rio do Fogo – 2, Maxaranguape – 3, Natal – 4, Tibau do Sul – 5 and Baía Formosa – 6
Pearson correlations (r) run in the study
The presence of a species at different depths (6 to 50 m) was correlated between LEK and SK by the Spearman correlation test, with depths paired and sorted in ascending order. We also evaluated if the occurrence of a species at a given depth was correlated amongst methods (LEK and SK).
Another Spearman correlation was done to verify if the depths of larger and smaller CPUE are correlated amongst methods (LEK and SK). This was done after ordering the CPUE from the highest to the lowest value.
Due to the differences in biomass scales obtained between methods (fishers tend to cite much higher catches in the LEK method than what was observed in the SK), the CPUE was transformed by log10(CPUE + 1) to make them comparable, as the idea was to compare catch patterns and not absolute values by depth between methods.
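The sketch below illustrates these comparisons with hypothetical CPUE-by-depth profiles; note that the rank-based Spearman correlation is unaffected by the monotone log transform, which matters mainly when comparing means between methods:

```python
import numpy as np
from scipy.stats import spearmanr

depths = np.array([10, 15, 20, 25, 30, 35, 40, 45, 50])  # pairing order
# Hypothetical CPUE-by-depth profiles for one species, both methods
cpue_lek = np.array([0.4, 0.6, 1.1, 1.5, 2.8, 3.0, 3.5, 2.9, 2.2])
cpue_sk  = np.array([0.2, 0.3, 0.7, 0.9, 1.6, 1.8, 2.1, 1.7, 1.5])

# log10(CPUE + 1) tames the scale difference between the two methods
lek_t = np.log10(cpue_lek + 1)
sk_t  = np.log10(cpue_sk + 1)

rho, p = spearmanr(lek_t, sk_t)   # rank-based: compares patterns, not scales
print(round(rho, 2), p)
```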
The fishing catches analyzed on site comprised 3,732 fish individuals of 93 species, totalling 2,193.3 kg. As specified earlier, three species were the most common and most abundant in the catches: blue runner (Caranx crysos = 16%), serra Spanish mackerel (Scomberomorus brasiliensis = 11%) and lane snapper (Lutjanus synagris = 7%), representing approximately 37% of the individuals and 34% of the catch weight.
The fishers were accurate at recognizing the rainy season (Fig. 3a), which was observed by the high correlation between the months they mentioned and the most intense period of rainfall (between May and July) observed between 2012 and 2014 (r = 0.90; p = 4.82E-05) (Fig. 3a and b; Table 1). Most of them (62%) cited that the rainy season is also when currents are the strongest and when drifting algae are more abundant (Table 1).
Relative frequency of rainfall in the year, as perceived by fishers (a) and recorded monthly by the local meteorological institution between 2012 and 2014 (EMPARN) (b)
Table 1 Significant Pearson correlations. All fish data refer to absolute frequency
The season for serra Spanish mackerel, according to the fishers, was correlated both with their identification of the rainy season and with the actual registered rainy season. The presence of this species was also correlated with fishers' perception of drifting algae and strong currents periods.
Fishers established clear occurrence patterns for the three species, which was confirmed for two of them (Table 1). Serra Spanish mackerel was expected by them to be more abundant in the rainy season, whereas blue runner was expected to present two peaks of abundance, one in the dry season up to December and another in June, the peak of the rainy season. Lane snapper was expected to be mainly abundant in the dry season, although some fishers mentioned its occurrence in the rainy season as well (Fig. 4).
Monthly occurrence of the three main species. The solid line represents the catches reported by fishers (LEK), and the dotted line represents catches actually registered on board (SK)
On the other hand, when variables classified as SK were correlated between themselves (see Fig. 2), only the correlation between precipitation and frequency of serra Spanish mackerel in the fishing confirmed that indeed this species was more common in the rainy season, as previously suggested by fishers. The SK correlations suggested an additional correlation between serra Spanish mackerel and blue runner.
Spatial patterns
The occurrence of these species at different fishing depths, as registered on board, was significantly correlated with fishers' information (citations of the depth range where each species occurs) (serra Spanish mackerel r = 0.98, p = 9.7E-044; blue runner, r = 0.99, p = 2.1E-059; lane snapper, r = 0.99, p = 3.8E-055). Therefore, fishers were accurate at reporting the depth at which each of these species is more commonly found. Both fishers' information and actual fishing observation suggest a concentration of boats in shallower waters, between the isobaths of 10 and 20 m (Fig. 5a). Also, for the fishers, currents get stronger and algae become more abundant with increasing depths (r = 0.93, p = 9.253E-006; r = 0.88, p = 1.498E-004, respectively). This limits their fishing to shallower waters (95% of them did not use deeper areas), even though the fishers claimed that deeper waters are best for fishing (χ² = 15.36; p = 8.88E-05).
Bathymetric patterns of species distribution in occurrence (a) and CPUE (b), based on information obtained from fishers (LEK) and from on-board observations (SK)
On the other hand, the CPUE estimated from fishers' information was significantly different from the one calculated from on-board observations (T-test serra Spanish mackerel p = 0.002; blue runner p = 0.001; lane snapper p = 1.3E-005). Fishers' estimates resulted in higher means and standard deviations than the actual CPUE, even after transformation of the data (Table 2).
Table 2 Variation of the CPUE (Kg/Km2*h) for the three main species, according to fishers' information and according to data registered on board
When the values of CPUE by depth were ranked from the highest to the lowest, higher values of CPUE were reported for deeper areas in both methods, with significant correlations between the fishers' information and the observations on-board for the most representative species.
The correlation of CPUE per depth between methods was higher for the more valuable commercial species. Serra Spanish mackerel is the most sought-after species because of its quality and abundance (r = 0.473; p = 0.00004), whereas lane snapper is the priciest and is present more frequently in the fishing, but is the least abundant in catches (r = 0.297; p = 0.012). On the other hand, blue runner was weakly correlated between methods. This species had the highest abundance observed in catches, but the lowest commercial value.
On-board observations agreed with fishers' claim that stated that fishing frequency decreased with increasing depth after 20 m. On the other hand, the CPUE of the three analyzed species were larger in deeper areas (Fig. 5a and b).
Overlapping spatial information
A complementary pattern was observed when both methods were overlapped. Blue runner had the widest bathymetric distribution, occurring from 10 to 50 m deep. For fishers, the CPUE of this species increased up to intermediate depths, between 30 and 40 m, whereas on-board data suggested a decrease in the CPUE after 50 m. (Fig. 5b). The observed bathymetric occurrence of lane snappers also coincided with what fishers suggested, especially regarding the interval between 10 and 20 m deep. The CPUE showed no clearly defined bathymetric patterns. However, both fishers and actual observation suggested that larger catches of lane snapper were observed around 30 m deep. According to fishers this species often occurs in catches with bottom-set gillnets, although the catches are not usually large. Neither of the methods provided a reliable pattern for the isobaths between 30 and 40 m for both the blue runner and the lane snapper, presumably due to the low frequency of fishing in such deeper areas (Fig. 5b). Serra Spanish mackerel, which showed congruent information for both methods regarding occurrence and CPUE, was mentioned by fishers and also confirmed by on-board observation to have its highest catches in waters 20 m deep.
This study shows that the use of local ecological knowledge (LEK) approach can provide reliable information on fish seasonal and spatial patterns equivalent to conventional scientific methods in fisheries science, although with somewhat wider variation. Besides, such approach is less time-consuming and less expensive than the traditional ones used in fishery sciences [34], and not limited to the direct observation of fishing operations or fish landing sampling.
The temporal record of fishing that fishers can offer provides a wealth of accurate details to understand the dynamics of fisheries and of environmental marine resources. This method, even if applied to few interviewees, can identify seasonal patterns for target species, which in this case were clear mostly for serra Spanish mackerel and lane snapper. On the other hand, on-board data, when collected over a short period of time, may not be enough to establish seasonal patterns, probably because, in this case study, the bottom-set gillnet is a multi-specific gear and also because there is wide variation in seasonality from year to year, affecting species patterns in the short term [49].
Fishers' knowledge tends to be spatially localized and seasonal. It is primarily acquired during observations that take place on fishing grounds during a given fishing season [50]. The accumulated experiences of fishers probably reflect the seasonal patterns accurately. The experience shared by many fishers on a daily basis helps them build consensus, which can be extracted even from a small number of informants [51]. These shared experiences are based on a wide range of trials and errors adjusted individually and collectively to optimize catches with lower fishing effort.
The seasonal patterns of species occurrence described in the literature and fishers' knowledge gathered here are in agreement. The higher occurrence of blue runner and lane snapper in the dry season is related to their reproductive period, which goes from early spring to late summer, as noted in the Mediterranean [52] and in Venezuela [53]. For the serra Spanish mackerel, their migration has been related to feeding during the dry season (March to August) in northern Brazil [54] and to reproduction during the rainy season (May to August) in the northeastern Brazilian coast [48].
Fish abundance is always changing: populations are patchily distributed and such distribution, depending on the scale, can vary daily, seasonally, inter-annually, and from decade to decade in relation to naturally varying conditions, as well as in response to human influences. Isolated observations may therefore have little value in evaluating change [55], in which case the long-term observations provided by fishers may be useful, as long as potential biases are considered. For example, fishers' knowledge has been shown to be more accurate when reporting extreme positive events related to abundance, such as their best individual catch ever for each species [32].
Here, it is also shown that fishers can provide reliable data regarding patterns of occurrence and catches at different depths, as this information was confirmed by on-board observations. Confirming one of the hypotheses, fishers did perceive deeper areas as more productive, but fishing in such areas was limited by strong currents and the presence of drifting algae, forcing fishers to use shallower and less productive waters. Currents have known influences on the oceanographic dynamics of the continental shelf [46, 47]. One such influence is likely to be changes in the abundance of drifting algae [56, 57], although this is not known in the literature for the studied coast. The effects of drifting algae on fisheries are not known in the global literature either. Only through LEK was it possible to understand the choice fishers make when choosing their fishing grounds, reinforcing the idea that some areas are protected against overexploitation by natural and technological circumstances, such as boat size, inaccessibility, and adverse environmental factors [58].
According to the fishers, strong currents force the gillnets onto the ground, which results in considerable loss of their catches, besides making it difficult to pull the net back. In the event of very strong currents, nets can be ripped or even lost. On the other hand, drifting algae do not affect catches as much as it affects the difficulty of pulling the net back, due to their extra weight. In both cases the problems are intensified in deeper waters. However, it was not possible to quantify such influence during the on-board observations.
The high occurrence of strong currents and drifting algae in deeper areas was probably responsible for the fewer observed net settings in these areas, despite the fact that both the observed data and fishers' LEK suggested that these were more productive sites. This is especially clear for blue runner, which was not caught between the 30 and 50 isobaths, but was mentioned by fishers to be productive in such depths. However, for the three species analyzed there is either no (serra Spanish mackerel) or very little information (blue runner and lane snapper) to confirm if such fishes are really not abundant in depths above 30 m, as fishers rarely fished in such areas. As expected, fishers rarely gave information regarding productivity in such depths.
Fishing at intermediary depths is usually unrelated to the fishing gear used, but associated with the ecological characteristics of the species being targeted [59]. For instance, in the Balearic Islands in the Mediterranean, fishers trawl at the slope of the continental shelf due to the biological features of the target species, which is more abundant at such depths [59]. Another study also suggested higher CPUE at 25–50 m deep for the yellowtail snapper Ocyurus chrysurus (Bloch, 1791) caught by small-scale fisheries using hook and line in the Brazilian northeast [60]. Such results led the authors to suggest that intermediary depths should be managed to protect this snapper.
Overlapping approaches
Fishers are a recognized source of marine knowledge, and they need to be involved in the collection and evaluation of their experience in fishery management [61]. Scientists should consider fishers' knowledge, especially when scientific knowledge about ecosystem functions is insufficient to provide unambiguous answers to management problems [62]. The complementarity of SK and LEK approaches generates more robust information, which could lead us a step further in the decision-making process. Some researchers believe that understanding key social-ecological linkages could support a transition toward sustainability in small-scale fisheries. Such a claim is based on the importance of partnerships that transcend disciplines and conventional approaches, involving multiple stakeholders to work collaboratively toward sustainable strategies [63, 64].
The recovery of the fishers' memories reported here was generally consistent with what was observed on board (high biomass at intermediate depths), besides providing additional information not available scientifically (seasonality of species and fishing areas limited by environmental factors). Even though the on-board data comprised a short time frame and interviews are somewhat subjective, as they rely on people's memories, the two sources presented convergent and reliable information regarding the concentration of fishing effort mostly at shallower depths. This approach has the advantage of incorporating unreported catches that would otherwise be lost in conventional methods [34].
Fishers were more accurate at describing the bathymetric pattern of CPUE for species of high commercial value, namely serra Spanish mackerel and lane snapper. For the blue runner, the methods (SK and LEK) did not agree, even though this was an abundant species in the catches. Perhaps its lower price makes less of a memorable impression than its large catches do. There is some evidence that fishers' perception is more accurate for commercial species, or at least those with remarkably large and recent catches [32]. Researchers agree that a systematic approach during the interviews is crucial. Fishers tend to feel more at ease when they are asked specific questions, such as about an exceptional event (e.g., their best catch ever) or about a typical catch at different times of their lives or at different seasons, rather than vague questions (e.g., how much the catch has changed) [34, 65].
However, even if such care is taken in the formulation of a question, fishers seem to overestimate typical catches, whereas they tend to recall their best catch better. This is attributed to a common cognitive phenomenon known as "flashbulb" memory, which concerns past events of personal importance and/or with striking consequences [66]. The typical catch, on the other hand, although more recurrent, may be harder to recall. Nevertheless, the pattern-based approach to typical-catch data used in this study yielded consistent information about the spatial and temporal distribution of the main species caught with bottom-set gillnets.
It is also important to acknowledge the relevance of considering the expertise of those being interviewed. Fishers' experience depends on several factors, such as their years of active fishing, gear, and exploited area [36, 61]. For this reason, only knowledgeable fishers and fishing masters were interviewed in the present study. Specifically, fishing masters, who comprised most of the sample, tend to own the nets and are responsible for choosing the nets to be used, which determines where they will be set and which fishing resources will be targeted. Therefore, their expertise encompasses a very particular and detailed knowledge of the local environmental conditions and of the ecological relations at play on a given site [36]. Here, specifically, it has been shown that fishers who have been fishing for about 30 years can identify seasonal and spatial patterns of occurrence of their main target species. However, it is not possible to confirm that less experienced fishers would not provide the same type of information, as this information was not compared across experience levels.
Finally, the integration of different types of knowledge has direct implications for an EAF. For instance, if all the information used in management were based on SK, it would be known that most effort is concentrated in shallower waters, and therefore such areas would require specific protection. While ecologically appropriate, such measures would be socially harmful [67], since fishers limit their effort to shallow waters not necessarily because these are productive grounds, but because they face technological and ecological limitations. Such information could only be known through LEK, reinforcing the need for integrative and participatory management systems [30, 43, 44, 68, 69].
This study showed that the integration of conventional fishing approaches with the experience accumulated by fishers reveals a great influence of the seasonal and spatial dynamics of marine environmental factors, which cause higher fishing pressure in shallow and usually poorer waters. Besides, it revealed the importance of direct onboard observation not only to produce more realistic and detailed data, but also as a way to confirm the factors that hamper fishing operations. Once the accuracy of LEK relative to SK is established, LEK may be used to gather reliable information for fisheries management through well-structured interviews capable of quickly revealing ecological patterns of target species. This is not to say that one type of knowledge is superior to the other, but that their integration might be the best path toward fisheries sustainability. Translating LEK into a language accessible to scientists is likely also an important step to achieve its integration into management and to provide a more holistic and realistic understanding of fishing. We therefore advocate for a continuous policy of fish landing sampling that contemplates effort data on bathymetric and general oceanographic conditions, but that also includes LEK to understand how such conditions interfere with fisheries.
Specifically, the fishing patterns observed in areas less exploited due to environmental limitations are important for selecting fishing zones in the management of bottom-set gillnet fisheries, and for preventing the emergence of ghost nets, caused by nets lost on the seabed that continue killing marine organisms indefinitely. These patterns need to be further investigated by combining fishers' and landing observations over large spatial and temporal scales. Besides, additional research should use LEK to identify other environmental factors limiting fishing effort and production that could be used as stepping stones to management.
The results shown here confirm that fishers do hold an important body of knowledge that could support faster and more affordable management initiatives. Moreover, fishers could certainly contribute additional information where official statistics are lacking. As science advances, it becomes clearer that fishers can enhance our understanding of marine ecosystem dynamics and of fisheries in general, which is not easily or cheaply achieved solely by conventional approaches.
Freitas MO, Moura RL, Francini-Filho RB, Minte-Vera CV. Spawning patterns of commercially important reef fish (Lutjanidae and Serranidae) in the tropical western South Atlantic. Sci Mar. 2011;75:135–46.
Reuchlin-Hugenholtz E, Shackell NL, Hutchings JA. The Potential for Spatial Distribution Indices to Signal Thresholds in Marine Fish Biomass. PLoS One. 2015;10:e0120500.
Erisman BE, Apel AM, MacCall AD, Román MJ, Fujita R. The influence of gear selectivity and spawning behavior on a data-poor assessment of a spawning aggregation fishery. Fish Res. 2014;159:75–87.
Gomez C, Williams AJ, Nicol SJ, Mellin C, Loeun KL, Bradshaw CJA. Species Distribution Models of Tropical Deep-Sea Snappers. PLoS One. 2015;10:e0127395.
Mustamaki N, Jokinen H, Scheinin M, Bonsdorff E, Mattila J. Seasonal small-scale variation in distribution among depth zones in a coastal Baltic Sea fish assemblage. ICES J Mar Sci. 2015. doi:10.1093/icesjms/fsv068.
Frédou T, Ferreira BP. Bathymetric trends of northeastern Brazilian snappers (Pisces, Lutjanidae): implications for the reef fishery dynamic. Braz Arch Biol Technol. 2005;48:787–800.
Shimose T, Wells RJD. Feeding Ecology of Bluefin Tunas. In: Kitagawa T, Kimura S, editors. Biology and Ecology of Bluefin Tuna. Boca Raton: CRC Press; 2015. p. 78–97. doi:10.1201/b18714-7.
Shepherd TD, Litvak MK. Density-dependent habitat selection and the ideal free distribution in marine fish spatial dynamics: considerations and cautions. Fish Fish. 2004;5:141–52.
Beddington JR, May RM. Harvesting natural populations in a randomly fluctuating environment. Science. 1977;197:463–5.
Cury PM, Christensen V. Quantitative ecosystem indicators for fisheries management. ICES J Mar Sci J Cons. 2005;62:307–10.
Hsieh C, Reiss CS, Hunter JR, Beddington JR, May RM, Sugihara G. Fishing elevates variability in the abundance of exploited species. Nature. 2006;443:859–62.
Pikitch E, Santora C, Babcock EA, Bakun A, Bonfil R, Conover DO, et al. Ecosystem-based fishery management. Science. 2004;305:346–7.
Cryer M, Mace PM, Sullivan KJ. New Zealand's ecosystem approach to fisheries management. Fish Oceanogr. 2016;25:57–70.
Garcia S, Cochrane K. Ecosystem approach to fisheries: a review of implementation guidelines. ICES J Mar Sci. 2005;62:311–8.
Jennings S. Indicators to support an ecosystem approach to fisheries. Fish Fish. 2005;6:212–32.
Johannes R. Ignore fishers' knowledge and miss the boat. Fish Fish. 2000;1:257–71.
Johannes R. The case for data-less marine resource management: examples from tropical nearshore finfisheries. Trends Ecol Evol. 1998;13:243–6.
FAO. Fisheries management. The ecosystem approach to fisheries. Human dimensions of the ecosystem approach to fisheries. 2009. http://www.fao.org/3/04071315-66b5-511b-8278-7966ca026b4b/i1146e00.pdf. Accessed 8 Jun 2016.
Hazin FHV, Broadhurst MK, Hazin HG. Preliminary Analysis of the Feasibility of Transferring New Longline Technology to Small Artisanal Vessels off Northeastern Brazil. Mar Fish Rev. 2000. http://aquaticcommons.org/9757/1/mfr6213.pdf.
Norse EA, Brooke S, Cheung WWL, Clark MR, Ekeland I, Froese R, et al. Sustainability of deep-sea fisheries. Mar Policy. 2012;36:307–20.
Kosamu IBM. Conditions for sustainability of small-scale fisheries in developing countries. Fish Res. 2015;161:365–73.
Carter J, Perrine D. A spawning aggregation of dog snapper, Lutjanus jocu (Pisces: Lutjanidae) in Belize, Central America. Bull Mar Sci. 1994;55:228–34.
Andrade H, Santos J, Taylor R. Life-history traits of the common snook Centropomus undecimalis in a Caribbean estuary and large-scale biogeographic patterns relevant to management. J Fish Biol. 2013;82:1951–74.
Teixeira SF, Ferreira BP, Padovan IP. Aspects of fishing and reproduction of the black grouper Mycteroperca bonaci (Poey, 1860) (Serranidae: Epinephelinae) in the Northeastern Brazil. Neotropical Ichthyol. 2004;2:19–30.
Batista VS, Fabré NN. Temporal and spatial patterns on serra, Scomberomorus brasiliensis (Teleostei, Scombridae), catches from the fisheries on the Maranhão coast, Brazil. Braz J Biol. 2001;61:541–6.
Silvano RAM, MacCord PFL, Lima RV, Begossi A. When Does this Fish Spawn? Fishermen's Local Knowledge of Migration and Reproduction of Brazilian Coastal Fishes. Environ Biol Fishes. 2006;76:371–86.
Pauly D, Hilborn R, Branch TA. Fisheries: does catch reflect abundance? Nature. 2013;494:303–6.
Worm B, Hilborn R, Baum JK, Branch TA, Collie JS, Costello C, et al. Rebuilding Global Fisheries. Science. 2009;325:578–85.
Worm B, Barbier EB, Beaumont N, Duffy JE, Folke C, Halpern BS, et al. Impacts of Biodiversity Loss on Ocean Ecosystem Services. Science. 2006;314:787–90.
Berkes F. Evolution of co-management: Role of knowledge generation, bridging organizations and social learning. J Environ Manage. 2009;90:1692–702.
Berkes F. Alternatives to Conventional Management: Lessons from Small Scale Fisheries. Environments. 2003;3:5–19.
Damasio LMA, Lopes PFM, Guariento RD, Carvalho AR. Matching Fishers' Knowledge and Landing Data to Overcome Data Missing in Small-Scale Fisheries. PLoS One. 2015;10:e0133122.
Rosa R, Carvalho AR, Angelini R. Integrating fishermen knowledge and scientific analysis to assess changes in fish diversity and food web structure. Ocean Coast Manag. 2014;102:258–68.
Tesfamichael D, Pitcher TJ, Pauly D. Assessing Changes in fisheries using fishers' knowledge to generate long time series of catch rates: a case study from the Red Sea. Ecol Soc. 2014;19. doi:10.5751/ES-06151-190118.
Pauly D. Anecdotes and the shifting baseline syndrome of fisheries. Trends Ecol Evol. 1995;10:430.
Davis A, Wagner JR. Who knows? On the importance of identifying "experts" when researching local ecological knowledge. Hum Ecol. 2003;31:463–89.
Turvey ST, Barrett LA, Yujiang H, Lei Z, Xinqiao Z, Xianyan W, et al. Rapidly Shifting Baselines in Yangtze Fishing Communities and Local Memory of Extinct Species: Rapidly Shifting Baselines. Conserv Biol. 2010;24:778–87.
Bevilacqua AHV, Carvalho AR, Angelini R, Christensen V. More than Anecdotes: Fishers' Ecological Knowledge Can Fill Gaps for Ecosystem Modeling. PLoS One. 2016;11:e0155655.
Davis A, Hanson JM, Watts H, MacPherson H. Local ecological knowledge and marine fisheries research: the case of white hake (Urophycis tenuis) predation on juvenile American lobster (Homarus americanus). Can J Fish Aquat Sci. 2004;61:1191–201.
Duda AM, Sherman K. A new imperative for improving management of large marine ecosystems. Ocean Coast Manag. 2002;45:797–833.
Berkes F, Colding J, Folke C. Rediscovery of Traditional Ecological Knowledge as Adaptive Management. Ecol Appl. 2000;10:1251–62.
Berkes F. Shifting perspectives on resource management: resilience and the reconceptualization of "natural resources" and "management". MAST. 2010;9:13–40.
Linke S, Bruckmeier K. Co-management in fisheries – Experiences and changing approaches in Europe. Ocean Coast Manag. 2015;104:170–81.
Wiber M, Charles A, Kearney J, Berkes F. Enhancing community empowerment through participatory fisheries research. Mar Policy. 2009;33:172–9.
Brahmananda Rao V, de Lima MC, Franchito S. Seasonal and interannual variations of rainfall over eastern northeast Brazil. J Clim. 1993;6:1754–63.
Testa V, Bosence DW. Physical and biological controls on the formation of carbonate and siliciclastic bedforms on the north-east Brazilian shelf. Sedimentology. 1999;46:279–301.
Vital H, editor. Recent advances in models of siliciclastic shallow-marine stratigraphy. Tulsa: SEPM (Society for Sedimentary Geology); 2008.
Lessa RP, de Nóbrega MF, Junior JB. Dinâmica das frotas pesqueiras da região Nordeste do Brasil. Ministério do Meio Ambiente: Recife; 2004. http://www.mma.gov.br/estruturas/revizee/_arquivos/din_frota_pesq.pdf. Accessed 2 May 2016.
Burgess MG, Polasky S, Tilman D. Predicting overfishing and extinction threats in multispecies fisheries. Proc Natl Acad Sci. 2013;110:15943–8.
Neis B, Schneider DC, Felt L, Haedrich RL, Fischer J, Hutchings JA. Fisheries assessment: what can be learned from interviewing resource users? Can J Fish Aquat Sci. 1999;56:1949–63.
Romney AK, Weller SC, Batchelder WH. Culture as consensus: A theory of culture and informant accuracy. Am Anthropol. 1986;88:313–38.
Sley A, Jarboui O, Ghorbel M, Bouain A. Food and feeding habits of Caranx crysos from the Gulf of Gabès (Tunisia). J Mar Biol Assoc U K. 2009;89:1375.
Gómez G, Guzmán R, Chacón R. Parámetros reproductivos y poblacionales de Lutjanus synagris en el Golfo de Paria, Venezuela. Zootecnia Tropical. 2001;3:335–57.
Batista MI, Horta e Costa B, Gonçalves L, Henriques M, Erzini K, Caselle JE, et al. Assessment of catches, landings and fishing effort as useful tools for MPA management. Fish Res. 2015;172:197–208.
Koslow JA, Couture J. Pacific Ocean observation programs: Gaps in ecological time series. Mar Policy. 2015;51:408–14.
Norkko J, Bonsdorff E, Norkko A. Drifting algal mats as an alternative habitat for benthic invertebrates. J Exp Mar Biol Ecol. 2000;248:79–104.
Yamasaki M, Aono M, Ogawa N, Tanaka K, Imoto Z, Nakamura Y. Drifting algae and fish: Implications of tropical Sargassum invasion due to ocean warming in western Japan. Estuar Coast Shelf Sci. 2014;147:32–41.
Planque B, Fromentin J-M, Cury P, Drinkwater KF, Jennings S, Perry RI, et al. How does fishing alter marine populations and ecosystems sensitivity to climate? J Mar Syst. 2010;79:403–17.
Moranta J, Stefanescu C, Massutí E, Morales-Nin B, Lloris D. Fish community structure and depth-related trends on the continental slope of the Balearic Islands (Algerian basin, western Mediterranean). Mar Ecol Prog Ser. 1998;171:247–59.
Nóbrega MF, Kinas PG, Ferrandis E, Lessa RP. Distribuição espacial e temporal da guaiúba Ocyurus chrysurus (Bloch, 1791) (Teleostei, Lutjanidae) capturada pela frota pesqueira artesanal na região nordeste do Brasil. Pan-Am J Aquat Sci. 2009;4:17–34.
Johannes RE, Neis B. Chapter 1: The value of anecdote. Paris: UNESCO Publishing; 2007.
Ruddle K, Hickey FR. Accounting for the mismanagement of tropical nearshore fisheries. Environ Dev Sustain. 2008;10:565–89.
Kittinger JN, Finkbeiner EM, Ban NC, Broad K, Carr MH, Cinner JE, et al. Emerging frontiers in social-ecological systems research for sustainability of small-scale fisheries. Curr Opin Environ Sustain. 2013;5:352–7.
Ostrom E. A General Framework for Analyzing Sustainability of Social-Ecological Systems. Science. 2009;325:419–22.
O'Donnell KP, Molloy PP, Vincent ACJ. Comparing Fisher Interviews, Logbooks, and Catch Landings Estimates of Extraction Rates in a Small-Scale Fishery. Coast Manag. 2012;40:594–611.
Hirst W, Phelps EA. Flashbulb Memories. Curr Dir Psychol Sci. 2016;25:36–41.
Begossi A, da Silva AL, editors. Ecologia de pescadores da Mata Atlântica e da Amazônia. São Paulo: Editora Hucitec; 2004.
Jentoft S. Fisheries co-management as empowerment. Mar Policy. 2005;29:1–7.
Lopes PFM, Rosa EM, Salyvonchyk S, Nora V, Begossi A. Suggestions for fixing top-down coastal fisheries management through participatory approaches. Mar Policy. 2013;40:100–10.
We thank all the fishers who generously and voluntarily contributed the valuable knowledge acquired through the practice of fishing. Many thanks to Leonardo Calado, Rayssa Melo, Adriana Guzman Maldonado and Wellington for helping in the fieldwork.
CAPES (PNPD Institutional Project – 2785/2011) funded this research through a post-doctoral grant to MFN and fieldwork support.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request, without disclosure of the interviewees' identities.
MSPL planned the study, performed the fieldwork, analyzed the data and wrote the manuscript. JELO discussed the results and helped write the manuscript. MFN planned the fieldwork, dealt with the logistics, discussed the results and helped write the manuscript. PFML planned the study, directed the analyses, and wrote the manuscript. All authors read and approved the final manuscript.
Due to the low literacy rate, we opted for oral consent from the interviewees. Fishers were told that they had the right to refuse participation, that their information was anonymous, and that it would only be used in this research. We explained the goals of the research to each fisher individually, which also gave us the chance to answer their questions and clarify doubts.
Department of Oceanography and Limnology at the Federal University of Rio Grande do Norte, Natal, Brazil
Mauro Sergio Pinheiro LIMA
, Jorge Eduardo LINS OLIVEIRA
& Marcelo Francisco de NÓBREGA
Fishing Ecology, Management and Economics group, Department of Ecology at the Federal University of Rio Grande do Norte, Natal, RN, Brazil
Priscila Fabiana Macedo LOPES
Correspondence to Mauro Sergio Pinheiro LIMA.
Interview form (DOCX 36 kb).
LIMA, M.S.P., OLIVEIRA, J.E.L., de NÓBREGA, M.F. et al. The use of Local Ecological Knowledge as a complementary approach to understand the temporal and spatial patterns of fishery resources distribution. J Ethnobiology Ethnomedicine 13, 30 (2017) doi:10.1186/s13002-017-0156-9
Artisanal fishing
Northeast Brazil
Caranx crysos
Scomberomorus brasiliensis
Lutjanus synagris | CommonCrawl |
\begin{document}
\begin{abstract} Work in the measure algebra of the Lebesgue measure on \( \pre{\omega}{2} \): for comeager many \( \eq{A} \) the set of points \( x \) such that the density of \( A \) at \( x \) is not defined is \( \boldsymbol{\Sigma}^{0}_{3} \)-complete; for some compact \( K \) the set of points \( x \) such that the density of \( K \) at \( x \) exists and is different from both \( 0 \) and \( 1 \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete; the set of all \( \eq{K} \) with \( K \) compact is \( \boldsymbol{\Pi}^{0}_{3} \)-complete. There is a set (which can be taken to be open or closed) in \( \mathbb{R}^n \) such that the density at any point is either \( 0 \) or \( 1 \), or else undefined. Conversely, if a subset of \( \mathbb{R}^n \) is such that the density exists at every point, then the value \( 1/2 \) is always attained. On the route to this result we show that the Cantor space can be embedded in a measured Polish space in a measure-preserving fashion. \end{abstract} \title{Lebesgue density and exceptional points}
\section{Statement of the main results} In this paper we study from the point of view (and with the methods) of descriptive set theory, some questions stemming from real analysis and measure theory. In order to state our results we recall a few definitions. The density of a measurable set \( A \) at a point \( x \in X \) is the limit \( \mathscr{D}_A ( x ) = \lim_{ \varepsilon {\downarrow} 0 } \mu ( A \cap \Ball ( x ; \varepsilon ) ) / \mu ( \Ball ( x ; \varepsilon ) ) \), where \( \mu \) is a Borel measure on the metric space \( X \) and \( \Ball ( x ; \varepsilon ) \) is the open ball centered at \( x \) of radius \( \varepsilon \). Let \( \Sharp ( A ) \) be the collection of all points \( x \) where \( 0 < \mathscr{D}_A ( x ) < 1 \), and let \( \Blur ( A ) \) be the collection of all points \( x \) where the limit \( \mathscr{D}_A ( x ) \) does not exist. The Lebesgue density theorem says that \( A \mathop{\triangle} \setofLR{x \in X}{ \mathscr{D}_A ( x ) = 1 } \) is null, and hence \( \Blur ( A ) \cup \Sharp ( A ) \) is null, when \( ( X , d , \mu ) \) is \emph{e.g.} the Euclidean space \( \mathbb{R}^n \) with the usual distance and the Lebesgue measure, or the Cantor space \( \pre{\omega }{2} \) with the usual ultrametric and the coin-tossing measure. If \( \Blur ( A ) = \emptyset \), i.e. \( \mathscr{D}_A ( x ) \) exists for any \( x \), then \( A \) is said to be solid; at the other extreme of the spectrum there are the spongy sets, that is sets \( A \) such that there are no points of intermediate density and there are points \( x \) where \( \mathscr{D}_A ( x ) \) does not exist, i.e., \( \Sharp ( A ) = \emptyset \) and \( \Blur ( A ) \neq \emptyset \). (Examples of solid sets are the balls in \( \mathbb{R}^n \) and the clopen sets in the Cantor space; it is not hard to construct a spongy set in the Cantor space, but the case of \( \mathbb{R}^n \) is another story.) All these notions are invariant under perturbations by a null set, so they can be defined on the measure algebra \( \MALG ( X , \mu ) \).
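As a simple illustrative computation, take \( X = \mathbb{R} \) with the Lebesgue measure \( \lambda \) and \( A = [ 0 ; 1 ] \): then \( \mathscr{D}_A ( x ) = 1 \) for \( x \in ( 0 ; 1 ) \), \( \mathscr{D}_A ( x ) = 0 \) for \( x \notin [ 0 ; 1 ] \), and at the endpoints \[ \mathscr{D}_A ( 0 ) = \lim_{ \varepsilon {\downarrow} 0 } \frac{ \lambda ( [ 0 ; 1 ] \cap ( - \varepsilon ; \varepsilon ) ) }{ \lambda ( ( - \varepsilon ; \varepsilon ) ) } = \lim_{ \varepsilon {\downarrow} 0 } \frac{ \varepsilon }{ 2 \varepsilon } = \frac{1}{2} , \] and similarly \( \mathscr{D}_A ( 1 ) = 1 / 2 \). Thus \( \Blur ( A ) = \emptyset \) and \( \Sharp ( A ) = \set{ 0 , 1 } \): closed intervals are solid, and the null set \( \Blur ( A ) \cup \Sharp ( A ) \) granted by the Lebesgue density theorem need not be empty.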
We prove a few results on these matters. Theorem~\ref{thm:KisPi03complete} shows that for a large class of spaces \( ( X , d , \mu ) \), the set \( \mathscr{K} \) of all \( \eq{K} \in \MALG \) with \( K \) compact is in \( \Fsigmadelta \setminus \Gdeltasigma \), i.e.~it is \( \boldsymbol{\Pi}^{0}_{3} \)-complete, in the logicians' parlance. The result still holds for \( \mathscr{F} \) the set of all \( \eq{F} \in \MALG \) with \( F \) closed. The result is first proved for the Cantor space \( \pre{\omega}{2} \) with the usual coin-tossing measure, and then extended to the general case by means of a construction enabling us to embed the Cantor space into \( ( X , \mu ) \) in a measure preserving way (Theorem~\ref{thm:embeddingCantorinPolish}). Restricting ourselves to the Cantor space, we show that for comeager many \( \eq{A} \in \MALG \) the set \( \Blur ( A ) \) is \( \Gdeltasigma \setminus \Fsigmadelta \), i.e.~\( \boldsymbol{\Sigma}^{0}_{3} \)-complete (Theorem~\ref{thm:blurrypointsSigma03}), and that \( \Sharp ( K ) \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete, for some compact set \( K \) (Theorem~\ref{thm:sharppointsPi03}). Finally we address the issue of solid and spongy sets in Euclidean spaces: we show that if \( A \) is solid, then it has density \( 1 / 2 \) at some point (Corollary~\ref{cor:nodualisticsetsinRn}), and that spongy sets exist (Theorem~\ref{thm:spongy}).
The paper is organized as follows. Section~\ref{sec:notationsandpreliminaries} collects some standard facts and notations used throughout the paper, while Section~\ref{sec:densityfunction} summarizes the basic results on the density function and the Lebesgue density theorem; these two sections can be skipped on a first reading. Section~\ref{sec:Cantorsets} is devoted to the problem of embedding the Cantor space in a Polish space, while a characterization of compact sets in the measure algebra is given in Section~\ref{sec:compactsetsinMALG}. Section~\ref{sec:exceptionalpoints} is devoted to the study of \( \Blur ( A ) \) and \( \Sharp ( A ) \), while the study of solid sets in \( \mathbb{R}^n \) and the construction of a spongy subset of \( \mathbb{R}^n \) are carried out in Section~\ref{sec:solid&spongy}.
\section{Notation and preliminaries}\label{sec:notationsandpreliminaries} The notation of this paper is standard and follows closely that of~\cite[][]{Kechris:1995kc,Andretta:2013uq}, but for the reader's convenience we summarize it below.
\subsection{Polish spaces} In a topological space \( X \), the closure, the interior, the frontier, and the complement of \( Y \subseteq X \) are denoted by \( \Cl Y \), \( \Int Y \), \( \Fr Y \), and \( Y^\complement \). A topological space is Polish if it is separable and completely metrizable. In a metric space \( ( X , d ) \), the open ball of center \( x \) and radius \( r \geq 0 \) is \( \Ball ( x ; r ) \), with the understanding that \( \Ball ( x ; 0 ) = \emptyset \). The collection \( \Bor ( X ) \) of all Borel subsets of \( X \) is stratified in the Borel hierarchy \( \boldsymbol{\Sigma}^{0}_{ \alpha } ( X ) \), \( \boldsymbol{\Pi}^{0}_{ \alpha } ( X ) \), \( \boldsymbol{\Delta}^{0}_{ \alpha } ( X ) \), with \( 1 \leq \alpha < \omega _1 \). Namely: \( \boldsymbol{\Sigma}^{0}_{1} \) is the collection of open sets, \( \boldsymbol{\Sigma}^{0}_{ \alpha } \) is the collection of sets \( \bigcup_{n} A_n \) with \( A_n \in \boldsymbol{\Pi}^{0}_{ \beta _n} \) and \( \beta _n < \alpha \), and \( \boldsymbol{\Pi}^{0}_{ \alpha } = \setofLR{ A^\complement }{ A \in \boldsymbol{\Sigma}^{0}_{ \alpha } } \). We also set \( \boldsymbol{\Delta}^{0}_{ \alpha } = \boldsymbol{\Sigma}^{0}_{ \alpha } \cap \boldsymbol{\Pi}^{0}_{ \alpha } \). Thus \( \boldsymbol{\Delta}^{0}_{1} \) are the clopen sets, \( \boldsymbol{\Pi}^{0}_{1} \) are the closed sets, \( \boldsymbol{\Sigma}^{0}_{2} \) are the \( \Fsigma \) sets, \( \boldsymbol{\Pi}^{0}_{2} \) are the \( \Gdelta \) sets, \( \boldsymbol{\Pi}^{0}_{3} \) are the \( \Fsigmadelta \) sets, and so on. The collections of all compact and of all \( \sigma \)-compact subsets of \( X \) are denoted by \( \KK ( X ) \) and \( \KK_ \sigma ( X ) \), respectively. If \( X \) is Polish, then \( \KK ( X ) \) endowed with the Vietoris topology is Polish.
A function \( f \colon X \to Y \) between Polish spaces is of \markdef{Baire class \( \xi \)} if the preimage of any open \( U \subseteq Y \) is in \( \boldsymbol{\Sigma}^{0}_{1 + \xi } \). The collection of all Baire class \( \xi \) functions from \( X \) to \( Y \) is denoted by \( \mathscr{B}_ \xi ( X , Y ) \) or simply by \( \mathscr{B}_ \xi \) when \( X \) and \( Y \) are clear from the context.
A \markdef{measurable space} \( ( X , \mathcal{S} ) \) consists of a \( \sigma \)-algebra \( \mathcal{S} \) on a nonempty set \( X \). A measurable space \( ( X , \mathcal{S} ) \) is \markdef{standard Borel} if \( \mathcal{S} \) is the \( \sigma \)-algebra of the Borel subsets of \( X \), for some suitable Polish topology on \( X \).
\subsection{Sequences and trees}\label{subsec:sequences&trees} \subsubsection{Sequences} The set of all functions from \( J \) to \( I \) is denoted by \( \Pre{ J }{ I } \). The set \( \pre{ < \omega }{I} = \bigcup_{n} \pre{ n}{ I } \) is the set of all finite sequences from \( I \), and \( \pre{ \leq \omega }{ I } = \pre{ < \omega }{ I } \cup \pre{ \omega }{ I } \). The \markdef{length} of \( x \in \pre{ \leq \omega }{I} \) is the ordinal \( \lh ( x ) = \dom ( x ) \). The \markdef{concatenation of \( s \in \pre{ < \omega }{I} \) with \( x \in \pre{ \leq \omega }{I} \)} is \( s {}^\smallfrown x \in \pre{ \leq \omega }{ I } \) defined by \( s {}^\smallfrown x ( n ) = s ( n ) \) if \( n < \lh ( s ) \), and \( s {}^\smallfrown x ( n ) = x ( i ) \) if \( n = i + \lh ( s ) \). We often blur the difference between the sequence \( \seq{ i } \) of length \( 1 \) and its unique element \( i \), and write \( t {}^\smallfrown i \) instead of \( t {}^\smallfrown \seq{ i } \). The sequence of length \( N \leq \omega \) that attains only the value \( i \) is denoted by \( i^{ ( N ) } \).
\subsubsection{Trees} A \markdef{tree} on a nonempty set \( I \) is a \( T \subseteq \pre{ < \omega }{I} \) closed under initial segments; the \markdef{body} of \( T \) is \( \body{T} = \setofLR{ b \in \pre{ \omega }{I} }{ \FORALL{n \in \omega } ( b \mathpunct{\upharpoonright} n \in T ) } \). A tree \( T \) on \( I \) is \markdef{pruned} if \( \FORALL{t \in T} \EXISTS{s \in T} ( t \subset s ) \). The set \( \body{T} \) is a topological space with the topology generated by the sets \[ {\boldsymbol N}\!_t ^{\body{T}}= {\boldsymbol N}\!_t = \setofLR{x \in \body{T} }{ x \supseteq t } \] with \( t \in T \). This topology is induced by the metric \( d_T ( x , y ) = 2^{-n} \) where \( n \) is least such that \( x ( n ) \neq y ( n ) \). This is actually a complete metric, and an ultrametric, i.e.\ the triangle inequality holds in the stronger form \( d ( x , z ) \leq \max \setLR{ d ( x , y ) , d ( y , z ) } \). Therefore \( \body{T} \) is zero-dimensional, i.e. it has a basis of clopen sets. A nonempty closed subset of \( \body{T} \) is of the form \( \body{S} \) with \( S \) a pruned subtree of \( T \). If \( T \) is a tree on a countable set \( I \), then \( \body{T} \) is separable, and therefore it is a Polish space.
The \markdef{localization} of \( X \subseteq \pre{ \leq \omega }{I} \) at \( s \in \pre{ < \omega }{I} \) is \[ \LOC{X}{s} = \setofLR{ t \in \pre{ \leq \omega }{I } }{ s {}^\smallfrown t \in X } . \] Thus if \( A \subseteq \pre{\omega}{I} \) then \( s {}^\smallfrown \LOC{A}{s} = A \cap {\boldsymbol N}\!^{\mathcal{X}}_s \), where \( \mathcal{X} = \body{ \pre{ < \omega }{I} } \). Note that if \( T \) is a tree on \( I \) and \( t \in T \), then \( \body{\LOC{T}{t}} = \LOC{ \body{T}}{t} \).
A function \( \varphi \colon S \to T \) between pruned trees is \begin{itemize} \item \markdef{monotone} if \( s_1 \subseteq s_2 \Rightarrow \varphi ( s_1 ) \subseteq \varphi ( s_2 ) \), \item \markdef{Lipschitz} if it is monotone and \( \lh s \leq \lh \varphi ( s ) \), \item \markdef{continuous} if it is monotone and \( \lim_n \lh \varphi ( x \mathpunct{\upharpoonright} n ) = \infty \) for all \( x \in \body{S} \). \end{itemize} If \( \varphi \) is Lipschitz then it is continuous, and a continuous \( \varphi \) induces a continuous function \[ f _ \varphi \colon \body{S} \to \body{T} , \quad f _ \varphi ( x ) = \bigcup_{n} \varphi ( x \mathpunct{\upharpoonright} n ) , \] and every continuous function \( \body{S} \to \body{T} \) arises this way. If \( \varphi \) is Lipschitz, then \( f_ \varphi \) is Lipschitz with constant \( \leq 1 \), that is \( d_T ( f ( x ) , f ( y ) ) \leq d_S ( x , y ) \), and every such function arises this way. These definitions can be extended to similar situations. For example, letting \( \pre{ < \omega \times \omega }{ I } = \bigcup_{n} \pre{ n \times n }{ I } \), we say that \( \varphi \colon \pre{ < \omega \times \omega }{I} \to T \) is Lipschitz if \[
\FORALL{n} \FORALL{m < n} \FORALL{a \in \pre{ n \times n }{I} } \left ( \varphi ( a \mathpunct{\upharpoonright} m \times m ) \subset \varphi ( a ) \right ) . \] Such \( \varphi \) defines a continuous map from the space \( \pre{ \omega \times \omega }{I} \) (which is homeomorphic to \( \pre{ \omega }{I} \)) to \( \body{T} \).
\subsection{The Cantor and Baire spaces} The \markdef{Cantor space} \( \pre{\omega }{2} \) is the body of the complete binary tree \( \pre{ < \omega }{2} \). A subset of a separable metric space is a \markdef{Cantor set} if it is nonempty, compact, zero-dimensional, and perfect (i.e. without isolated points). By a theorem of Brouwer's~\cite[][Theorem 7.4]{Kechris:1995kc} every Cantor set is homeomorphic to \( \pre{\omega }{2} \), whence the name. The typical example of such set is \( E_{1/3} \), the closed, nowhere dense, null subset of \( [ 0 ; 1 ] \) usually known as \emph{Cantor's middle-third set}. See Section~\ref{sec:Cantorsets} for more examples of Cantor sets.
The \markdef{Baire space} \( \pre{\omega}{\omega} \) is the body of \( \pre{< \omega}{\omega} \). If \( T \) is pruned, then \( \body{T} \) is compact iff \( T \) is finitely branching, and therefore every compact subset of \( \pre{\omega}{\omega} \) has empty interior. The Baire space is homeomorphic to \( [ 0 ; 1 ] \setminus \mathbb{D} \), where \( \mathbb{D} = \setofLR{ k \cdot 2^{-n}}{ 0 \leq k \leq 2^n \wedge n \in \omega } \) is the set of dyadic numbers, via the map \begin{equation}\label{eq:homeomorphismBaire} G \colon \pre{\omega}{\omega} \to [ 0 ; 1 ] \setminus \mathbb{D} , \qquad \setLR{G ( x ) } = \bigcap_{n} I ( x \mathpunct{\upharpoonright} n ) \end{equation} where the \( I ( s ) \) (for \( s \in \pre{< \omega}{\omega} \)) are the closed intervals with endpoints in \( \mathbb{D} \) defined as follows: \( I ( \emptyset ) = [ 0 ; 1 ] \), and if \( I ( s ) = [ a ; b ] \), then \[ I ( s {}^\smallfrown k ) = \begin{cases} [ b - ( b - a ) 2^{- k} ; b - ( b - a ) 2^{- k - 1} ] & \text{if \( \lh s \) is odd,} \\ [ a + ( b - a ) 2^{- k - 1} ; a + ( b - a ) 2^{- k } ] & \text{if \( \lh s \) is even,} \end{cases} \] see~\cite[][Chapter VII, \S 3]{Levy:2002pt}. By Cantor's theorem \( \mathbb{D} \setminus \set{ 0 , 1 } \) is order isomorphic to any countable dense set \( D \subseteq \mathbb{R} \), and hence there is a homeomorphism \( ( 0 ; 1 ) \to \mathbb{R} \) that maps \( ( 0 ; 1 ) \setminus \mathbb{D} \) onto \( \mathbb{R} \setminus D \). In other words, \( \pre{\omega}{\omega} \) is homeomorphic to \( \mathbb{R} \setminus D \) where \( D \) is a countable dense set; in particular, it is homeomorphic to the set of irrational numbers.
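As a sanity check on the definition of the intervals \( I ( s ) \): \( I ( \seq{0} ) = [ 1 / 2 ; 1 ] \), \( I ( \seq{0 , 0} ) = [ 1 / 2 ; 3 / 4 ] \), \( I ( \seq{0 , 0 , 0} ) = [ 5 / 8 ; 3 / 4 ] \), and so on, each interval being half as long as its predecessor, with left and right endpoints retained alternately; hence \[ G ( 0^{ ( \omega ) } ) = \frac{1}{2} + \frac{1}{8} + \frac{1}{32} + \dots = \frac{2}{3} , \] which, as promised, is not a dyadic number.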
\subsection{Measures} A \markdef{measure space} \( ( X , \mathcal{S} , \mu ) \) consists of a \( \sigma \)-algebra \( \mathcal{S} \) on a nonempty set \( X \) and a \( \sigma \)-additive measure \( \mu \) with domain \( \mathcal{S} \). We always assume that \( \mu \) is \markdef{nonzero}, that is \( \mu ( X ) > 0 \). Given a measure space \( ( X , \mathcal{S} , \mu ) \) we say that \( \mu \) is \markdef{non-singular} \footnote{In the literature these measures are also called non-atomic or continuous, but in this paper the adjective \emph{continuous} is reserved for a different property (Definition~\ref{def:continuousmeasure}).} or \markdef{diffuse} if \( \mu ( \set{x} ) = 0 \) for all \( x \in X \), it is a \markdef{probability measure} if \( \mu ( X ) = 1 \), it is \markdef{finite} if \( \mu ( X ) < \infty \), it is \markdef{\( \sigma \)-finite} if \( X = \bigcup_{n} X_n \) with \( X_n \in \mathcal{S} \) and \( \mu ( X_n ) < \infty \). Following Carathéodory, \( \mathcal{S} \) can be extended to \( \MEAS_\mu \), the \( \sigma \)-algebra of \markdef{\( \mu \)-measurable sets}, and the measure can be uniquely extended to a measure (still denoted by \( \mu \)) on \( \MEAS_\mu \). A set \( N \in \MEAS_\mu \) is \markdef{null} if \( \mu ( N ) = 0 \), that is if there is \( A \in \mathcal{S} \) such that \( N \subseteq A \) and \( \mu ( A ) = 0 \). A set \( A \in \MEAS_\mu \) is \markdef{nontrivial} if \( A , A^\complement \notin \NULL_\mu \).
For \( A , B \in \MEAS_\mu \) set \( A \subseteq_\mu B \mathbin{\Leftrightarrow } \mu ( A \setminus B ) = 0 \), and \[
A =_\mu B \mathbin{\Leftrightarrow } A \subseteq_\mu B \wedge B \subseteq_\mu A \mathbin{\Leftrightarrow } \mu ( A \mathop{\triangle} B ) = 0 .
\] Taking the quotient of \( \MEAS_\mu \) by the ideal \( \NULL_\mu \) or equivalently by the equivalence relation \( =_\mu \), we obtain the \markdef{measure algebra} of \( \mu \) \[ \MALG ( X , \mu ) = \frac{\MEAS_\mu }{ \NULL_\mu} = \frac{ \mathcal{S}}{ \mathcal{S} \cap \NULL_\mu} , \] which is a boolean algebra. (Whenever possible we will drop the mention to \( X \) and/or \( \mu \) in the definition of measure algebra.) The measure \( \mu \) induces a function on the quotient \[ \hat{ \mu } \colon \MALG \to [ 0 ; + \infty ] , \quad \hat{ \mu } ( \eq{A} ) = \mu ( A ) . \] We often write \( \mu ( \eq{A} ) \) or \( \mu \eq{A} \) instead of \( \hat{ \mu } ( \eq{A} ) \). The set \( \MALG_\mu \) is endowed with the topology generated by the sets \( \mathcal{B} _{ \eq{A} , r } = \setof{ \eq{B} }{ \mu ( A \mathop{\triangle} B ) < r } \) for \( \eq{A} \in \MALG \) and \( r > 0 \). When \( \mu \) is finite, this topology is metrizable with the distance \( \delta ( \eq{A} , \eq{B} ) = \mu ( A \mathop{\triangle} B ) \), and therefore \( \mathcal{B} _{ \eq{A} , r } = \Ball ( \eq{A} ; r ) \).
A \markdef{Borel measure} on a topological space \( X \) is a measure \( \mu \) defined on \( \Bor ( X ) \), the collection of all Borel subsets of \( X \); we say that \( \mu \) is \markdef{fully supported} if \( \mu ( U ) > 0 \) for all nonempty open sets \( U \). A Borel measure is \markdef{inner regular} if \( \mu ( A ) = \sup \setof{ \mu ( F )}{ F \subseteq A \wedge F \text{ is closed}} \); it is \markdef{outer regular} if \( \mu ( A ) = \inf \setof{\mu ( U )}{U \supseteq A \wedge U \text{ is open}} \). A finite Borel measure on a metric space is both inner and outer regular. A Borel measure is \markdef{locally finite} if every point has a neighborhood of finite measure; hence in a second countable space a locally finite measure is automatically \( \sigma \)-finite. A \markdef{Radon space} \( ( X , \mu ) \) is a Hausdorff topological space \( X \) with a locally finite Borel measure which is \markdef{tight}, that is \( \mu ( A ) = \sup \setofLR{\mu ( K )}{K \subseteq A \wedge K \in \KK ( X ) } \). A \markdef{metric measure space} \( ( X , d , \mu ) \) is a metric space endowed with a Borel measure; if the underlying topological space is Polish we will speak of a \markdef{Polish measure space}. Every finite Borel measure on a Polish space is tight. In this paper, unless otherwise stated, we \emph{work in a fully supported, locally finite metric measure space}. The space \( \MALG_\mu \) is Polish when \( X \) is Polish and \( \mu \) is Borel and finite. If moreover \( \mu \) is a non-singular, probability measure on \( X \) then \( \MALG_\mu \) is isomorphic to the measure algebra constructed from the Lebesgue measure \( \lambda \) on \( [ 0 ; 1 ] \)~\cite[][Theorem 17.41]{Kechris:1995kc}.
If \( \mu \) is nonsingular, then \( \lim_{ \varepsilon {\downarrow} 0 } \mu ( \Ball ( x ; \varepsilon ) ) = 0 \), for all \( x \in X \). The next definition strengthens this fact.
\begin{definition}\label{def:continuousmeasure} Let \( ( X , d , \mu ) \) be a fully supported, locally finite metric measure space. Then \( \mu \) is \begin{itemize}[leftmargin=1pc] \item \markdef{continuous} if for all \( x \in X \) the map \( \cointerval{0}{+\infty} \to [ 0 ; +\infty ] \), \( r \mapsto \mu ( \Ball ( x ; r ) ) \), is continuous, \item \markdef{uniform} if \( \mu ( \Ball ( x ; r ) ) = \mu ( \Ball ( y ; r ) ) \) for all \( x , y \in X \), i.e.~if the measure of an open ball depends only on its radius. \end{itemize} \end{definition}
The Lebesgue measure on \( \mathbb{R}^n \) is the typical example of a continuous and uniform measure. If a measure is continuous, then a much stronger form of continuity holds.
\begin{lemma}\label{lem:continuityofmeasure} If \( \mu \) is continuous, then the function \[ B \colon X \times \cointerval{0}{+\infty} \to \MALG , \quad ( x , r ) \mapsto \eq{\Ball ( x , r ) } \] is continuous. In particular the map \( X \times \cointerval{0}{+\infty} \to [ 0 ; + \infty ] \), \( ( x , r ) \mapsto \mu ( \Ball ( x , r ) ) \) is continuous. \end{lemma}
\begin{proof} Fix \( ( x , r ) \in X \times \cointerval{0}{+\infty} \), in order to prove continuity of \( B \) at \( ( x , r ) \). Fix also \( \varepsilon > 0 \). By continuity of \( \mu \) there is \( \delta > 0 \) such that \[ \forall r' \in \cointerval{0}{+\infty} \left ( \card{ r - r' } < \delta \Rightarrow \card{ \mu ( \Ball ( x ; r ) ) - \mu ( \Ball ( x ; r' ) ) } < \varepsilon \right ) . \] Let \( ( x' , r' ) \in X \times \cointerval{0}{+\infty} \) with \( d ( x , x' ) < \delta / 4 \) and \( \card{ r - r' } < \delta / 4 \). If \( r > \frac{\delta }2 \), then \[ \Ball ( x ; r - \textstyle \frac { \delta }{2} ) \subseteq \Ball ( x ; r' - \textstyle \frac{ \delta}{4} ) \subseteq \Ball ( x' ; r' ) \subseteq \Ball ( x ; r' + \textstyle \frac{ \delta}{4} ) \subseteq \Ball ( x ; r + \textstyle \frac{ \delta}{2} ) , \] so \begin{align*} \mu ( \Ball ( x ; r ) \mathop{\triangle} \Ball ( x' ; r' ) ) & = \mu ( \Ball ( x ; r ) \setminus \Ball ( x' ; r' ) ) + \mu ( \Ball ( x' ; r' ) \setminus \Ball ( x ; r ) ) \\
& \leq \mu ( \Ball ( x ; r ) \setminus \Ball ( x ; r - \textstyle \frac{ \delta}{2} ) ) + \mu ( \Ball ( x ; r + \textstyle \frac{ \delta}{2} ) \setminus \Ball ( x ; r ) )
\\
& < 2 \varepsilon. \end{align*} On the other hand, if \( r \leq \frac{\delta }2 \), then \( \Ball ( x ' ; r ' ) \subseteq \Ball ( x ; r + \frac{ \delta }{2} ) \) as well, so \begin{equation*} \begin{split} \mu ( \Ball ( x ; r ) \mathop{\triangle} \Ball ( x' ; r' ) ) & = \mu ( \Ball ( x ; r ) \setminus \Ball ( x' ; r' ) ) + \mu ( \Ball ( x' ; r' ) \setminus \Ball ( x ; r ) ) \\
& \leq \mu ( \Ball ( x ; r ) ) + \mu ( \Ball ( x ; r + \textstyle \frac{ \delta}{2} ) \setminus \Ball ( x ; r ) )
\\
&< 2 \varepsilon. \qedhere \end{split} \end{equation*} \end{proof}
Using an argument as in Lemma~\ref{lem:continuityofmeasure} one can prove
\begin{lemma}\label{lem:uniformcontinuityofmeasure} The function \( B \) from Lemma~\ref{lem:continuityofmeasure} is uniformly continuous if \[
\FORALL{ \varepsilon > 0 } \EXISTS{ \delta > 0 } \FORALL{x \in X} \FORALL{r , r' \geq 0 } \bigl ( \card{ r - r' } < \delta \Rightarrow \card{ \mu ( \Ball ( x ; r ) ) - \mu ( \Ball ( x ; r' ) ) } < \varepsilon \bigr ) . \] \end{lemma}
\subsubsection{Measures on the Cantor and Baire spaces}\label{subsubsec:measureonCantor} A zero-dimensional Polish space can be identified, up to homeomorphism, with a closed subset of \( \pre{\omega}{\omega} \). Let \( T \) be a pruned tree on \( \omega \); a locally finite Borel measure \( \mu \) on \( \body{T} \subseteq \pre{\omega}{\omega} \) is completely described by its values on the basic open sets \( {\boldsymbol N}\!_s \) with \( s \in T \), so it can be identified with a map \[ w \colon T \to [ 0 ; M ] \] where \( M = \mu ( \body{T} ) \leq + \infty \), and such that \( w ( \emptyset ) = M \), \( T_\infty = \setof{ t \in T}{ w ( t ) = \infty } \) is a well-founded (possibly empty) tree, and for all \( t \in T \setminus T_ \infty \) \[ w ( t ) = \sum_{ t {}^\smallfrown i \in T , i \in \omega } w ( t {}^\smallfrown i ) . \] Thus if the measure is finite then \( T_\infty = \emptyset \). If we require the measure to be fully supported, just replace in the definition above \( [ 0 ; M ] \) with \( \ocinterval{0}{ M } \). The measure is non-singular just in case \[
\lim_{n \to \infty} w ( x \mathpunct{\upharpoonright} n ) = 0 . \] The \markdef{Lebesgue measure} \( \mu^{\mathrm{C}} \) on \( \pre{\omega }{2} \) is determined by \( w \colon \pre{ < \omega }{2} \to \ocinterval{0}{1} \), \( w ( s ) = 2^{- \lh s } \); it is also known as the \markdef{Bernoulli} or \markdef{coin tossing measure}. The \markdef{Lebesgue measure} \( \mu^{\mathrm{B}} \) on \( \pre{\omega}{\omega} \) is determined by \( w \colon \pre{< \omega}{\omega} \to \ocinterval{0}{1} \), \( w ( s ) = \prod_{i < \lh ( s )} 2^{ - s ( i ) - 1} \). Both \( \mu^{\mathrm{C}} \) and \( \mu^{\mathrm{B}} \) are non-singular, and neither is continuous, as the next result shows. The reason for tagging \( \mu^{\mathrm{C}} \) and \( \mu^{\mathrm{B}} \) with the name ``Lebesgue'' is that they are induced by the Lebesgue measure on \( \mathbb{R} \) via suitable embeddings---for \( \mu^{\mathrm{B}} \) apply \( G \colon \pre{\omega}{\omega} \to [ 0 ; 1] \) of~\eqref{eq:homeomorphismBaire}, and for \( \mu^{\mathrm{C}} \) see Example~\ref{xmp:Cntorofmeasure2}.
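For example, the weight \( w \) inducing \( \mu^{\mathrm{B}} \) is immediately seen to satisfy the conditions above: \( w ( s {}^\smallfrown k ) = w ( s ) \cdot 2^{ - k - 1 } \), so \( \sum_{ k \in \omega } w ( s {}^\smallfrown k ) = w ( s ) \sum_{ k \in \omega } 2^{ - k - 1 } = w ( s ) \), and \( w ( x \mathpunct{\upharpoonright} n ) \leq 2^{ - n } \to 0 \), witnessing non-singularity.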
\begin{proposition}\label{prop:measuresonBairenotcontinuous} Let \( T \) be a pruned tree on \( \omega \), and let \( \mu \) be a locally finite, non-singular, fully supported Borel measure on \( \body{T} \). Then for each \( x \in \body{T} \) the set of discontinuity points of \( r \mapsto \mu ( \Ball ( x ; r ) ) \) accumulates to \( 0 \). \end{proposition}
\begin{proof} Let \( w \colon T \to [ 0 , \infty ] \) be the map inducing \( \mu \). Since \( \mu \) is fully supported and non-singular, \( \body{T} \) has no isolated points and \( \FORALL{ s \in T } \EXISTS{ t \in T} ( s \subset t \wedge w ( s ) > w ( t ) ) \). Thus for each \( x \in \body{T} \) and each \( n \) such that \( w ( x \mathpunct{\upharpoonright} n ) < + \infty \) and \( x \mathpunct{\upharpoonright} n \) has more than one immediate successor in \( T \), \[ w ( x \mathpunct{\upharpoonright} n ) = \lim_ {\varepsilon \downarrow 2^{-n} } \mu ( \Ball ( x ; \varepsilon ) ) > \mu ( \Ball ( x ; 2^{-n} ) ) = w ( x \mathpunct{\upharpoonright} n + 1 ) . \qedhere \] \end{proof}
In particular, Proposition~\ref{prop:measuresonBairenotcontinuous} applies to \( \mu^{\mathrm{C}} \) and \( \mu^{\mathrm{B}} \).
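Indeed, in the case of \( \mu^{\mathrm{C}} \) the discontinuities can be located explicitly: \( d ( x , y ) < 2^{ - n } \) iff \( x \mathpunct{\upharpoonright} ( n + 1 ) = y \mathpunct{\upharpoonright} ( n + 1 ) \), so \( \Ball ( x ; 2^{ - n } ) = {\boldsymbol N}\!_{ x \mathpunct{\upharpoonright} ( n + 1 ) } \) has measure \( 2^{ - n - 1 } \), while \( \Ball ( x ; \varepsilon ) = {\boldsymbol N}\!_{ x \mathpunct{\upharpoonright} n } \) has measure \( 2^{ - n } \) whenever \( 2^{ - n } < \varepsilon \leq 2^{ - n + 1 } \); thus \( r \mapsto \mu^{\mathrm{C}} ( \Ball ( x ; r ) ) \) has a jump of size \( 2^{ - n - 1 } \) at each \( r = 2^{ - n } \).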
\section{Cantor sets}\label{sec:Cantorsets} \subsection{Cantor-schemes}\label{subsec:Cantorschemes} A \markdef{Cantor-scheme} in a metric space \( ( X , d ) \) is a system \( \seqofLR{U_s }{ s \in \pre{ < \omega }{2} } \) of nonempty open subsets of \( X \) such that \begin{itemize} \item \( \Cl ( U_{s {}^\smallfrown i} ) \subseteq U_s \), for all \( s \in \pre{ < \omega }{2} \) and \( i \in \set{ 0 , 1} \), \item \( \Cl ( U_{s {}^\smallfrown 0 } ) \cap \Cl ( U_{s {}^\smallfrown 1 } ) = \emptyset \). \end{itemize} If it also satisfies \begin{itemize} \item \( \lim_{n \to \infty} \diam ( U_{ z \mathpunct{\upharpoonright} n } ) = 0 \), for all \( z \in \pre{\omega }{2} \), \end{itemize} we say that it has \markdef{shrinking diameter}. A Cantor-scheme of shrinking diameter in a complete metric space yields a continuous injective \( F \colon \pre{\omega }{2} \to X \) \begin{equation}\label{eq:homeoCantorscheme} F ( z ) = \text{the unique point in } \bigcap_{n} \Cl ( U_{ z \mathpunct{\upharpoonright} n} ) . \end{equation} Thus \( \ran F \) is a Cantor subset of \( X \). Conversely, if \( F \colon \pre{\omega }{2} \to K \subseteq X \) witnesses that \( K \) is a Cantor set, then there is a Cantor-scheme of shrinking diameter that yields \( K \): let \( U_\emptyset = X \), and for each \( s \in \pre{ < \omega }{2} \) let \( K_s = F ( {\boldsymbol N}\!_s ) \) and let \( U_{s {}^\smallfrown i} = \Ball ( K_{s {}^\smallfrown i} ; r_s / 3) \) where \( r_s = d ( K_{ s {}^\smallfrown 0} , K_{s {}^\smallfrown 1 } ) \).
\begin{example}\label{xmp:Cntorofmeasure2} Fix \( \varepsilon _n > 0 \) such that \( \sum_{n = 0}^\infty 2^n \varepsilon _n = 1 \), and consider \( \seqof{ U_s }{ s \in \pre{ < \omega }{2} } \), the Cantor-scheme on \( \mathbb{R} \) defined as follows: each \( U_s \) is an open interval \( ( a_s ; b_s ) \) with \( a_\emptyset = 0 \), \( b_\emptyset = 2 \), and \[
a_{s {}^\smallfrown 0 } = a_s , \quad b_{s {}^\smallfrown 0 } = ( a_s + b_s - \varepsilon _{\lh s} ) / 2 , \quad a_{s {}^\smallfrown 1 } = ( a_s + b_s + \varepsilon _{\lh s} ) / 2, \quad b_{ s {}^\smallfrown 1 } = b_s . \] In other words, \( U_{s {}^\smallfrown 0 } \) and \( U_{s {}^\smallfrown 1 } \) are obtained by removing from \( U_s \) a closed centered interval of length \( \varepsilon _{\lh s} \). This scheme has shrinking diameter, so we obtain a Cantor set \( K \subseteq [ 0 ; 2 ] \). Note that for this Cantor scheme the function \( F \), defined as in~\eqref{eq:homeoCantorscheme}, is measure preserving between \( \pre{\omega }{2} \) with \( \mu^{\mathrm{C}} \) and \( K \) with the induced Lebesgue measure \( \lambda \). \end{example}
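A concrete instance: take \( \varepsilon _n = 2^{ - 2 n - 1 } \), so that \( \sum_{ n = 0 }^\infty 2^n \varepsilon _n = \sum_{ n = 0 }^\infty 2^{ - n - 1 } = 1 \). An easy induction shows that each \( U_s \) with \( \lh s = n \) has length \( 2^{ - n } + 4^{ - n } \), while the total length removed inside \( U_s \) at later stages is \( \sum_{ m \geq n } 2^{ m - n } \varepsilon _m = 4^{ - n } \); hence \( \lambda ( K \cap \Cl ( U_s ) ) = 2^{ - n } = \mu^{\mathrm{C}} ( {\boldsymbol N}\!_s ) \), in agreement with the fact that \( F \) is measure preserving.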
Cantor-schemes on \( \mathbb{R} \) can be generalized by using ternary sequences instead of binary ones. Let \( \seqof{ K_s , I_s^- , I_s^+ }{ s \in \pre{ < \omega}{ \set{ -1 , 0 , 1 } } } \) be such that \( K_s = [ a_s ; b_s ] \) and \( I_s^- = ( c_s^- ; d_s^- ) \), \( I_s^+ = ( c_s^+ ; d_s^+ ) \), with \( a_s < c_s^- < d_s^- < c_s^+ < d_s^+ < b_s \) and \( K_{s {}^\smallfrown \seq{-1}} = [ a_s ; c_s^- ] \), \( K_{s {}^\smallfrown \seq{0}} = [ d_s^- ; c_s^+ ] \), and \( K_{s {}^\smallfrown \seq{1}} = [ d_s^+ ; b_s ] \). In other words, the intervals \( K_{ s {}^\smallfrown \seq{ i } } \) with \( i \in \set{ -1 , 0 , 1 } \) are obtained by removing from \( K_s \) two open intervals \( I_s^- \) and \( I_s^+ \). Let \[
K^{( n )} = \bigcup_{s \in \pre{ n }{ \set{ -1 , 0 , 1 } } } K_s \text{ and } K = \bigcap_{n \in \omega } K^{( n )} . \] We dub this a \markdef{triadic Cantor-construction}. Note that \( K^{( n )} \) is the disjoint union of the closed intervals \( K_s \) for \( s \in \pre{ n }{ \set{ -1 , 0 , 1 } } \); in other words these \( K_s \) are the connected components of \( K^{( n )} \). We say that this construction has shrinking diameter if \( \lim_{n \to \infty }\card{K_{ z \mathpunct{\upharpoonright} n }} = 0 \) for all \( z \in \pre{ \omega}{ \set{ -1 , 0 , 1 } } \), and in this case we have a homeomorphism just like in~\eqref{eq:homeoCantorscheme}, that is \( F \colon \pre{ \omega}{ \set{ -1 , 0 , 1 } } \to K \), \( F ( z ) ={} \)the unique element of \( \bigcap_{n \in \omega } K_{ z \mathpunct{\upharpoonright} n } \). Since \( \pre{ \omega }{ 2 } \) and \( \pre{ \omega }{ 3 } \) are homeomorphic, this is just a Cantor-construction in disguise.
If the triadic Cantor-construction is of \emph{non-shrinking} diameter, a map like in~\eqref{eq:homeoCantorscheme} is undefined, and the map \( \pre{ \omega}{ \set{ -1 , 0 , 1 } } \to \KK ( \mathbb{R} ) \), \( z \mapsto \bigcap_{n \in \omega } K_{ z \mathpunct{\upharpoonright} n } \) is not continuous. On the other hand, regardless of whether the Cantor-construction is of shrinking diameter, there is a continuous surjection \begin{equation}\label{eq:continuouscodingspongy} G \colon K \twoheadrightarrow \pre{ \omega}{ \set{ -1 , 0 , 1 } } , \quad K \ni x \mapsto G ( x ) \colon \omega \to \set{ -1 , 0 , 1 } \end{equation} defined as follows: if \( K_s \) is the connected component of \( K^{( n )} \) to which \( x \) belongs, \[ G ( x ) ( n ) = i \mathbin{\, \Leftrightarrow \,} x \in K_{ s {}^\smallfrown \seq{ i }} . \] Note that the connected components of \( K \) are the \( \bigcap_{n} K_{ z \mathpunct{\upharpoonright} n} \), for \( z \in \pre{ \omega}{ \set{ -1 , 0 , 1 } } \). In Section~\ref{subsec:spongy} we define a spongy subset of \( \mathbb{R} \) via a triadic Cantor-construction of non-shrinking diameter.
\subsection{Embedding the Cantor set in a measure preserving way} A basic result in Descriptive Set Theory states that an uncountable Polish space contains a Cantor set. The next result shows that the embedding can be taken to be measure-preserving.
\begin{theorem}\label{thm:embeddingCantorinPolish} Suppose \( \mu \) and \( \nu \) are nonsingular Borel measures on a Polish space \( ( X , d ) \) and on the Cantor set \( \pre{\omega }{2} \), respectively. Suppose also \( \nu \) is fully supported, and that \[
\exists Y \in \Bor ( X ) \left ( \nu ( \pre{\omega }{2} ) < \mu ( Y ) < \infty \right ) . \tag{\( * \)} \] Then there is a continuous injective \( H \colon \pre{\omega }{2} \to X \) that preserves the measure. \end{theorem}
The assumption (\( * \)) holds when \( \mu \) is \( \sigma \)-finite and \( \nu ( \pre{\omega}{2} ) < \mu ( X ) \). The proof of Theorem~\ref{thm:embeddingCantorinPolish} is based on a simple combinatorial fact, which can be formulated as follows: if we have empty barrels of capacity \( b_1, \dots , b_n \) and sufficiently small amphoræ of capacity \( a_1, \dots , a_m \) so that \( a_1 + \dots + a_m < b_1 + \dots + b_n \), it is possible to pour the wine of the amphoræ into the barrels, without overflowing any of them, so that the content of each amphora ends up in a single barrel.
\begin{lemma}\label{lem:embeddingCantorinPolish} Let \( 0 < a < b \) and \( 0 < A < B \) be real numbers. \begin{subequations} \begin{enumerate-(a)} \item\label{lem:embeddingCantorinPolish-a} For all \( b_1 , \dots , b_n > 0 \) such that \( b = b_1 + \cdots + b_n \) there is an \( r > 0 \) with the following property: for all \( 0 < a_1 , \dots , a_m \leq r \) such that \( a = a_1 + \cdots + a_m \), there are pairwise disjoint (possibly empty) sets \( I_1 \cup \dots \cup I_n = \setLR{1 , \dots , m} \) such that for all \( k = 1 , \dots , n \) \begin{equation}\label{eq:lem:embeddingCantorinPolish-a}
\sum_{i \in I_k} a_i < b_k . \end{equation} \item\label{lem:embeddingCantorinPolish-b} For all \( A_1 , \dots , A_N > 0 \) such that \( A = A_1 + \cdots + A_N \) there is an \( R > 0 \) with the following property: for all \( 0 < B_1 , \dots , B_M \leq R \) such that \( B_1 + \cdots + B_M = B \), there are pairwise disjoint nonempty sets \( J_1 \cup \dots \cup J_N = \setLR{1 , \dots , M} \) such that for all \( k = 1 , \dots , N \) \begin{equation}\label{eq:lem:embeddingCantorinPolish-b} A_k < \sum_{j \in J_k} B_j . \end{equation} Moreover the \( J_k \)s can be taken to be consecutive intervals, that is there are natural numbers \( j_0 = 0 < j_1 < \dots < j_N = M \) such that \( J_k = \setLR{ j_{k - 1} + 1 , \dots , j_k } \). \end{enumerate-(a)} \end{subequations} \end{lemma}
\begin{proof} \ref{lem:embeddingCantorinPolish-a} Given \( b_1 , \dots , b_n \), let \( r = ( b - a ) / n \). Suppose we are given \( 0 < a_1 , \dots , a_m \leq r \). By induction on \( k \), construct pairwise disjoint sets \( I_k \subseteq \setLR{1 , \dots , m} \) that are maximal with respect to~\eqref{eq:lem:embeddingCantorinPolish-a}, and let \( I = I_1 \cup \dots \cup I_n \). If \( I \neq \setLR{1 , \dots , m} \), then by maximality of \( I_k \), \[ b_k \leq \frac{b - a}{n} + \sum_{ i \in I_k} a_i , \] so we would have \[ b = \sum_{k = 1}^n b_k \leq ( b - a ) + \sum_{i \in I} a_i < ( b - a ) + \sum_{i = 1}^m a_i = b - a + a = b , \] a contradiction.
\ref{lem:embeddingCantorinPolish-b} The proof is similar to the one of~\ref{lem:embeddingCantorinPolish-a}. If \( N = 1 \) there is nothing to prove, so we may assume otherwise. Given \( A_1 , \dots , A_N \), let \( R = ( B - A ) / ( N - 1 ) \). Suppose we are given \( 0 < B_1 , \dots , B_M \leq R \). By induction on \( k \), we shall construct \( j_0 = 0 < j_1 < \dots < j_N = M \) such that each \( J_k = \setLR{ j_{k - 1} + 1 , \dots , j_k } \) satisfies~\eqref{eq:lem:embeddingCantorinPolish-b}, and it is least such, except possibly the last one \( j_N \). The definition of \( j_1 \) is clear: it is the least \( j \leq M \) such that \( \sum_{h = 0}^{j} B_h > A_1 \), and such number exists since \( A_1 < A < B \). We must show that the other \( j_k \)s exist, i.e. that the construction does not break-down before step \( N \). Towards a contradiction, suppose \( 1 \leq \bar{N} < N \) is least such that \( j_{\bar{N} + 1 } \) is not defined. By construction \( A_k + R > \sum_{i \in J_k} B_i \) for all \( k \leq \bar{N} \), and therefore \[ \sum_{k = 1}^{\bar{N}} A_k > \sum_{ i = 1}^{j_{\bar{N}}} B_i - \bar{N} R , \] and by case assumption \( A_{\bar{N} + 1} > \sum_{ i = j_{\bar{N}} + 1 }^M B_i \), if \( j_{\bar{N}} < M \). Then \[
A \geq \sum_{k = 1}^{\bar{N} + 1 } A_k > \sum_{i = 1}^M B_i - \bar{N} R \geq B - ( N - 1 ) R = A , \] a contradiction. \end{proof}
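To see the role of the bound \( r = ( b - a ) / n \) in~\ref{lem:embeddingCantorinPolish-a}, consider for instance \( n = 2 \), \( b_1 = b_2 = 1 / 2 \), and \( a = 3 / 4 \), so that \( r = 1 / 8 \): if some amphora is left outside \( I_1 \), then by maximality \( \sum_{ i \in I_1 } a_i \geq b_1 - r = 3 / 8 \), so the leftover amphoræ have total content at most \( 3 / 4 - 3 / 8 = 3 / 8 < b_2 \), and they all fit into the second barrel.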
We now turn to the proof of Theorem~\ref{thm:embeddingCantorinPolish}. The Cantor scheme construction with shrinking diameters guarantees that there is a continuous embedding \( f \colon \pre{\omega}{2} \to X \), but the map \( f \) need not be measure preserving---in fact it can happen that \( f ( \pre{\omega}{2} ) \) is \( \mu \)-null. Of course we could modify the Cantor scheme by using Borel subsets of \( X \) of appropriate measure, but then we would have no control on the diameters of these Borel sets. The cure is to carefully mix these two approaches, so that the construction succeeds.
\begin{proof}[Proof of Theorem~\ref{thm:embeddingCantorinPolish}] We claim it is enough to prove the result when \( \nu ( \pre{\omega}{2} ) < \mu ( X ) < + \infty \). In fact if \( Y \in \Bor ( X ) \) and \( \nu ( \pre{\omega}{2} ) < \mu ( Y ) < + \infty \) then there is a finer topology \( \tau \) on \( X \) so that \( Y \) with the topology induced by \( \tau \) is Polish~\cite[][Theorem 13.1]{Kechris:1995kc}, so that any continuous injective measure preserving map \( H \colon \pre{\omega }{2} \to ( Y , \tau ) \) is also continuous as a function \( H \colon \pre{\omega }{2} \to X \) when \( X \) is endowed with the original topology. Therefore we may assume that \[ \nu ( \pre{\omega}{2} ) < \mu ( X ) < + \infty . \] By a result of Lusin and Souslin~\cite[][Theorem 13.7]{Kechris:1995kc}, \( X \) is the continuous injective image of a closed subset of the Baire space, so we may fix a pruned tree \( T \) on \( \omega \) and a continuous bijection \( f \colon \body{T} \to X \). To avoid ambiguity we write \begin{align*} \tilde{{\boldsymbol N}\!}_t & = \setofLR{ z \in \body{T}}{ t \subseteq z } , & {\boldsymbol N}\!_s & = \setofLR{ z \in \pre{\omega }{2} }{ s \subseteq z } \end{align*} to denote the basic open neighborhood of \( \body{T} \) and of \( \pre{\omega }{2} \) determined by \( t \in T \) and \( s \in \pre{ < \omega }{2} \). The measure \( \mu \) together with \( f \) induces a measure \( \mu' \) on \( \body{T} \) defined by \[ \mu' ( \tilde{{\boldsymbol N}\!}_t ) = \mu ( f ( \tilde{{\boldsymbol N}\!}_t ) ) , \] and by tightness, there is a pruned, finitely branching \( T' \subseteq T \) such that \( \nu ( \pre{\omega }{2} ) < \mu' ( \body{T'} ) \). Without loss of generality we may assume \( T' \) is \markdef{normal}, that is the set of successors of \( t \in T' \) is \( \setofLR{t {}^\smallfrown \seq{ i }}{ i < n } \) for some \( n \in \omega \). Therefore, it is enough to show that there is an injective, continuous \( g \colon \pre{\omega }{2} \to \body{T'} \), such that \( \nu ( {\boldsymbol N}\!_s ) = \mu' ( g ( {\boldsymbol N}\!_s ) ) \), for all \( s \in \pre{ < \omega }{2} \), since then \( f \circ g \colon \pre{\omega }{2} \to X \) would be injective, continuous, and measure preserving, as required. Therefore it all boils down to proving that: \begin{quote} If \( T \) is a pruned, normal, finitely branching tree on \( \omega \), and \( u \colon \pre{ < \omega }{2} \to ( 0 ; + \infty ) \) and \( w \colon T \to ( 0 ; + \infty ) \) induce fully supported, nonsingular, Borel measures \( \nu \) on \( \pre{\omega }{2} \) and \( \mu \) on \( \body{T} \), respectively, such that \( u ( \emptyset ) = \nu ( \pre{\omega }{2} ) < \mu ( \body{T} ) = w ( \emptyset ) \), then there is a continuous \( \varphi \colon \pre{ < \omega }{2} \to T \) such that the induced function \( f_ \varphi \colon \pre{\omega }{2} \to \body{T} \) is injective and \( \nu \left ( {\boldsymbol N}\!_s \right ) = \mu \left ( f_ \varphi ( {\boldsymbol N}\!_s ) \right ) \). \end{quote}
Suppose we are given \( T \), \( u \) and \( w \) as above. The function \( \varphi \colon \pre{ < \omega }{2} \to T \) is first defined on \( \bigcup_{k \in \omega} \pre{L_k}{2} \) for some suitable increasing sequence \( ( L_k )_k \), and then extended to all of \( \pre{ < \omega }{2} \) by requiring that when \( L_k < \lh s < L_{k + 1} \), then \( \varphi ( s ) = \varphi ( s \mathpunct{\upharpoonright} L_k ) \).
We require that \[ s \in \pre{L_k}{2} \mathbin{\, \Rightarrow \,} \varphi ( s ) \in \Lev_{M_k} ( T ) \equalsdef \setof{ t \in T }{ \lh ( t ) = M_k } , \] where \( ( M_k )_k \) is a suitable increasing sequence. The function \( f_ \varphi \) will be injective, but the same need not be true of the map \( \varphi \): even if \( x \mathpunct{\upharpoonright} L_k \neq y \mathpunct{\upharpoonright} L_k \) one might need to reach a larger \( L_n \) in order to witness \( \varphi ( x \mathpunct{\upharpoonright} L_n ) \neq \varphi ( y \mathpunct{\upharpoonright} L_n ) \) and hence \( f_ \varphi ( x ) \neq f_ \varphi ( y ) \). For each \( t \in \varphi ( \pre{ L_k }{ 2 } ) = \setofLR{ \varphi ( s ) \in \Lev_{M_k} ( T ) }{ s \in \pre{L_k}{2} } \) let \[ \mathcal{A}_k ( t ) = \setof{ s \in \pre{L_k}{2} }{ \varphi ( s ) = t } \] so that \( \setofLR{\mathcal{A}_k ( t ) }{ t \in \varphi ( \pre{ L_k }{ 2 } ) } \) is the partition of \( \pre{ L_k }{ 2 } \) given by the fibers of \( \varphi \).
Set \( \varphi ( \emptyset ) = \emptyset \), \( L_0 = M_0 = 0 \) and let \( \delta_0 \) be a positive real such that \( u ( \emptyset ) < w ( \emptyset ) < u ( \emptyset ) + \delta_0 \).
Fix \( k \in \omega \) and suppose that \( L_k \), \( M_k \), and \( \delta _k \) have been defined, together with the values \( \varphi ( s ) \) for all \( s \in \pre{L_k}{2} \), and suppose that for every \( t \in \varphi ( \pre{L_k}{2} ) \), \begin{equation}\label{eq:th:embeddingCantorinPolish} \sum_{s \in \mathcal{A}_k ( t ) } u ( s ) < w ( t ) < \delta_k + \sum_{s \in \mathcal{A}_k ( t ) } u ( s ) . \end{equation} The goal is to define \( L_{k + 1} \), \( M_{k + 1} \), \( \delta _{k + 1} \) and the values \( \varphi ( s ) \in \Lev_{M_{k + 1}} ( T ) \) for \( s \in \pre{L_{k + 1}}{2} \). Let \( \delta _{ k + 1} = 2^{- 2 L_k } \). (The actual values of the \( \delta _j \)s are only used in Claim~\ref{cl:embeddingCantorinPolish4} to certify that \( f_ \varphi \) is measure preserving, and play no significant role in the construction of \( \varphi \).)
\begin{claim}\label{cl:embeddingCantorinPolish1} Let \( R > 0 \). Then there is \( M \) such that \( w ( t ) < R \) for all \( t \in \Lev _{M} ( T ) \). Moreover, the same bound then holds at all higher levels: \( \FORALL{ M' > M } \FORALL{ t \in \Lev _{M'} ( T ) } ( w ( t ) < R ) \).
Similarly, \( \EXISTS{ M } \FORALL{ s \in \pre{M}{2}} u ( s ) < R \), thus \( \FORALL{ M' > M } \FORALL{ s \in \pre{M'}{2} } ( u ( s ) < R ) \). \end{claim}
\begin{proof} Otherwise, the tree \( \setofLR{ t \in T}{w ( t ) \geq R} \) would be infinite, and since it is finitely branching, by K\"onig's lemma it would have an infinite branch \( z \); but then \( \mu ( \set{ z } ) = \lim_{n} w ( z \mathpunct{\upharpoonright} n ) \geq R > 0 \), contradicting the nonsingularity of \( \mu \). The argument for \( u \) is analogous. \end{proof}
Fix a \( t \in \varphi ( \pre{L_k}{2} ) \). Applying Lemma~\ref{lem:embeddingCantorinPolish}\ref{lem:embeddingCantorinPolish-b} to the numbers \begin{align*} A_{ s {}^\smallfrown i } &= u ( s {}^\smallfrown i ) , & ( \text{for } ( s , i ) \in \mathcal{A}_k ( t ) \times 2 ) \\ A & = \textstyle \sum_{ s \in \mathcal{A}_k ( t ) } u ( s ) , \\
B & = w ( t ) , \end{align*} a value \( R_t \) is obtained such that whenever \( B_1 , \ldots , B_M \leq R_t \) and \( B_1 + \ldots + B_M = B \), there exists a partition of \( \set{ 1 , \ldots , M} \) into sets \( J_{ s {}^\smallfrown i } \) such that \( u ( s {}^\smallfrown i ) < \sum_{ h \in J_{ s {}^\smallfrown i } } B_h \). Let \( R = \min \setofLR{\delta_{k + 1} , R_t }{ t \in \varphi ( \pre{L_k}{2} ) } \). Applying Claim~\ref{cl:embeddingCantorinPolish1}, let \( M_{k + 1} > M_k \) be such that \( w ( t' ) < R \) for all \( t' \in \Lev _{M_{k + 1}} ( T ) \). Let \[ D_t = \setof{ t' \in \Lev _{M_{k + 1}} ( T ) }{t' \mathpunct{\upharpoonright} M_k = t} . \] It follows that \( B_{t'} = w ( t' ) < R \) for \( t' \in D_t \), and \( B = \sum_{ t' \in D_t } B_{t'} \) so there is a partition \( \setof{J_{ s {}^\smallfrown i } }{ ( s , i ) \in \mathcal{A}_k ( t ) \times 2 } \) of \( D_t \) such that \begin{equation}\label{eq:th:embeddingCantorinPolish1} u ( s {}^\smallfrown i ) < \sum_{t' \in J_{ s {}^\smallfrown i }} w ( t' ) . \end{equation} Choose \[ C_{ s {}^\smallfrown i } \subseteq J_{ s {}^\smallfrown i } \] minimal so that \( u ( s {}^\smallfrown i ) < \sum_{t' \in C_{ s {}^\smallfrown i }} w ( t' ) \). By the choice of \( R \) one also has \begin{equation}\label{eq:th:embeddingCantorinPolish2}
\sum_{t' \in C_{ s {}^\smallfrown i }} w ( t' ) < u ( s {}^\smallfrown i ) + \delta_{k + 1} . \end{equation}
Now, for each \( ( s , i ) \in \mathcal{A}_k ( t ) \times 2 \), apply Lemma~\ref{lem:embeddingCantorinPolish}\ref{lem:embeddingCantorinPolish-a} to the numbers \begin{align*} b_{t'} & = w ( t' ) , & ( \text{for } t' \in C_{ s {}^\smallfrown i } ) \\ b & = \textstyle \sum_{t' \in C_{ s {}^\smallfrown i } } w ( t' ) \\ a & = u ( s {}^\smallfrown i ) \end{align*} to get a value \( r_{ s {}^\smallfrown i } \) such that whenever \( 0 < a_1 , \dots , a_m \leq r_{ s {}^\smallfrown i } \) and \( a_1 + \ldots + a_m = a \), there are pairwise disjoint, possibly empty, subsets \( I_{t' } \) of \( \set{ 1 , \ldots , m} \) such that \( \bigcup_{t' \in C_{ s {}^\smallfrown i }} I_{t' } = \set{ 1 , \ldots , m} \) and \( \sum_{h \in I_{t' }} a_h < w ( t' ) \). Let \( r \) be the least of all \( r_{ s {}^\smallfrown i } \). By Claim~\ref{cl:embeddingCantorinPolish1}, there is \( L_{k + 1} > L_k \) such that \( u ( s ) < r \) for all \( s \in \pre{L_{k + 1}}{2} \). Set \( E_{ s {}^\smallfrown i } = \setofLR{ s' \in \pre{L_{k + 1}}{2}}{s {}^\smallfrown i \subseteq s' } \), so that \( \sum_{s' \in E_{s {}^\smallfrown i}} u ( s' ) = u ( s {}^\smallfrown i ) \). By Lemma~\ref{lem:embeddingCantorinPolish}\ref{lem:embeddingCantorinPolish-a}, \( E_{ s {}^\smallfrown i } \) is partitioned into sets \( I_{t' } \), for \( t' \in C_{ s {}^\smallfrown i } \), such that \( \sum_{ s' \in I_{t' }} u ( s' ) < w ( t' ) \). By~\eqref{eq:th:embeddingCantorinPolish2} we have \( w ( t' ) < \delta_{k + 1} + \sum_{ s' \in I_{t' }} u ( s' ) \). Let \( \varphi ( s' ) = t' \) for \( s' \in I_{t' } \), so that \( \varphi \) is defined on \( \pre{ L_{ k + 1}}{2} \). This concludes the definition of \( \varphi \colon \pre{ < \omega }{2} \to T \). Note that by construction \( \mathcal{A}_{ k + 1 } ( t' ) = I_{t'} \), so~\eqref{eq:th:embeddingCantorinPolish} holds for \( k + 1 \).
\begin{claim}\label{cl:embeddingCantorinPolish2} The function \( \varphi \colon \pre{ < \omega }{2} \to T \) is continuous. \end{claim}
\begin{proof} First notice that \( \varphi \) is monotone: if \( s' \in \pre{L_{k + 1}}{2} \) and \( s = s' \mathpunct{\upharpoonright} L_k \), then \( \varphi ( s' ) \in C_{ s {}^\smallfrown s' ( L_k ) } \subseteq D_{ \varphi ( s ) } \), so \( \varphi ( s ) \subseteq \varphi ( s' ) \). Moreover, \( \lim_{k \to \infty } \lh \varphi ( x \mathpunct{\upharpoonright} L_k ) = \lim_{k \to \infty }M_k = + \infty \), since the sequence \( ( M_k )_k \) is increasing. \end{proof}
\begin{claim}\label{cl:embeddingCantorinPolish3} \( f_{\varphi } \colon \pre{\omega }{2} \to \body{T} \) is injective. \end{claim}
\begin{proof} Let \( x , y \) be distinct elements of \( \pre{\omega }2 \), and let \( k \in \omega \) be such that \( x \mathpunct{\upharpoonright} L_k\neq y \mathpunct{\upharpoonright} L_k \). Since \( \varphi ( x \mathpunct{\upharpoonright} L_{k + 1} ) \in C_{x \mathpunct{\upharpoonright} ( L_k + 1 ) } \), \( \varphi ( y \mathpunct{\upharpoonright} L_{ k + 1 } ) \in C_{y \mathpunct{\upharpoonright} ( L_k + 1 ) } \), and \( C_{x \mathpunct{\upharpoonright} ( L_k + 1 ) }\cap C_{y \mathpunct{\upharpoonright} ( L_k + 1 ) } = \emptyset \), it follows that \( \varphi ( x \mathpunct{\upharpoonright} L_{k + 1} ) \neq\varphi ( y \mathpunct{\upharpoonright} L_{k + 1} ) \), whence \( f_{\varphi } ( x ) \neq f_{\varphi } ( y ) \). \end{proof}
\begin{claim}\label{cl:embeddingCantorinPolish4} \( \FORALL{ s \in \pre{ < \omega }{2} } [ \nu ( {\boldsymbol N}\!_s ) = \mu \left ( f_{\varphi } ( {\boldsymbol N}\!_s ) \right ) ] \). \end{claim}
\begin{proof} It is enough to establish the claim for \( s \in \pre{ L_k }{ 2 } \), for some \( k > 0 \). For \( h \geq k \) let \( X ( h , s ) = \bigcup \setof{C_{ s' {}^\smallfrown i }}{s' \supseteq s \wedge \lh ( s' ) = L_h \wedge i \in 2 } \).
First remark that \begin{equation}\label{eq:th:embeddingCantorinPolish5} f_{\varphi } ( {\boldsymbol N}\!_s ) = \bigcap_{h \geq k + 1} \bigcup_{p \in X ( h , s ) } {\boldsymbol N}\!_p . \end{equation} To prove that the left-hand side is contained in the right-hand side, argue as follows. Given \( x \supseteq s \), for \( h \geq k + 1 \) choose \( s \subseteq s' \in \pre{L_h}{2} \), \( i \in 2 \), and \( s'' \in \pre{< \omega}{2} \) such that \( \lh ( s'' ) = L_{h + 1} - L_h - 1 \) and \( s' {}^\smallfrown \seq{ i } {}^\smallfrown s'' \subseteq x \); then \( \varphi ( s' {}^\smallfrown \seq{ i } {}^\smallfrown s'' ) \in C_{s' {}^\smallfrown i} \). Conversely, pick \( y \) in the right-hand side of the equation: for every \( h \geq k + 1 \) there are \( s_h \in \pre{L_h}{2} \), \( i_h \in 2 \), \( p_h \in C_{s_h {}^\smallfrown i_h} \) such that \( s \subseteq s_h \) and \( p_h \subseteq y \), and since all \( p_h \) are compatible, all \( s_h \) must be compatible as well by construction, so their union is an element \( x \in {\boldsymbol N}\!_s \) such that \( f_{\varphi } ( x ) = y \).
Equation~\eqref{eq:th:embeddingCantorinPolish5} yields \( f_{\varphi } ( {\boldsymbol N}\!_s ) \) as a decreasing intersection of disjoint unions, so \[ \mu \left ( f_{\varphi } ( {\boldsymbol N}\!_s ) \right ) = \inf_{ h \geq k + 1} \sum_{ p \in X ( h , s ) } w ( p ) . \]
Now, for any given \( h \geq k + 1 \), letting \( Y ( h ; s ) = \setofLR{ s' {}^\smallfrown i }{ s \subseteq s' \in \pre{L_h}{2}, i \in 2 } \), \[ \begin{split} \nu ( {\boldsymbol N}\!_s ) & = \sum_{s'' \in Y ( h ; s ) } u ( s'' ) \\ & < \sum_{p \in C_{s''} , s'' \in Y ( h ; s ) } w ( p ) \\ & = \sum_{ p \in X ( h , s ) } w ( p ) \\
& < \sum_{s'' \in Y ( h ; s ) } ( u ( s'' ) + \delta_{h + 1} )
\\
&= \sum_{ s'' \in Y ( h ; s ) } u ( s'' ) + 2^{L_h + 1} \delta_{h + 1} . \end{split} \] As \( \lim_{h \to \infty} \sum_{s'' \in Y ( h ; s ) } u ( s'' ) + 2^{L_h + 1}\delta_{h + 1} = \nu ( {\boldsymbol N}\!_s ) \) the claim is proved. \end{proof} This completes the proof of Theorem~\ref{thm:embeddingCantorinPolish}. \end{proof}
\section{The density function}\label{sec:densityfunction} Let \( ( X , d , \mu ) \) be a fully supported, locally finite metric measure space and let \( A \in \MEAS_\mu \). For \( x \in X \), the \markdef{upper} and \markdef{lower density of \( x \) at} \( A \) are \[ \mathscr{D}^ + _A ( x ) = \limsup_{ \varepsilon {\downarrow} 0}\frac{ \mu ( A \cap \Ball ( x ; \varepsilon ))}{ \mu ( \Ball ( x ; \varepsilon ) )} ,\qquad \mathscr{D}^- _A ( x ) = \liminf_{ \varepsilon {\downarrow} 0}\frac{ \mu ( A \cap \Ball ( x ; \varepsilon ))}{ \mu ( \Ball ( x ; \varepsilon ) )} . \] The \markdef{oscillation of \( x \) at} \( A \) is \[ \mathscr{O}_A ( x ) = \mathscr{D}^ + _A ( x ) - \mathscr{D}^- _A ( x ) . \] When \( \mathscr{O}_A ( x ) = 0 \), that is to say: \( \mathscr{D}^ + _A ( x ) \leq \mathscr{D}^- _A ( x ) \), the value \( \mathscr{D}^ + _A ( x ) = \mathscr{D}^- _A ( x ) \) is called the \markdef{density of \( x \) at \( A \)} \[ \mathscr{D}_A ( x ) = \lim_{ \varepsilon {\downarrow} 0}\frac{ \mu ( A \cap \Ball ( x ; \varepsilon ))}{ \mu ( \Ball ( x ; \varepsilon ) )} . \] It is important that in the computation of \( \mathscr{D} _A \) and \( \mathscr{O}_A \) balls of every radius \( \varepsilon \) be considered, and not just for \( \varepsilon \) ranging over a countable set --- see Section~\ref{subsec:basisofdifferentiation}. Note that if \( \mu ( \set{x} ) > 0 \) and \( x \in A \), then \( \mathscr{D}_A ( x ) = 1 \) for trivial reasons.
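As a simple illustration of these notions (a routine computation, recorded for later reference): in \( \mathbb{R} \) with the Lebesgue measure \( \lambda \), let \( A = \cointerval{ 0 }{ + \infty } \). For every \( \varepsilon > 0 \) \[ \frac{ \lambda ( A \cap \Ball ( 0 ; \varepsilon ) ) }{ \lambda ( \Ball ( 0 ; \varepsilon ) ) } = \frac{ \varepsilon }{ 2 \varepsilon } = \frac{ 1 }{ 2 } , \] so \( \mathscr{O}_A ( 0 ) = 0 \) and \( \mathscr{D}_A ( 0 ) = 1 / 2 \), while \( \mathscr{D}_A ( x ) = 1 \) for \( x > 0 \) and \( \mathscr{D}_A ( x ) = 0 \) for \( x < 0 \).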
The limit \( \mathscr{D}_A ( x ) \) does not exist if and only if \( \mathscr{O}_A ( x ) > 0 \). In any case if \( A =_\mu B \) then \begin{align*}
\mathscr{D}_{ A^\complement} ( x ) & = 1 - \mathscr{D}_A ( x ) , & \mathscr{D}_A ( x ) & = \mathscr{D}_B ( x ) , \\
\mathscr{O}_{ A^\complement } ( x ) & = \mathscr{O}_A ( x ) , & \mathscr{O}_A ( x ) & = \mathscr{O}_B ( x ) , \end{align*} in the sense that if one of the two sides of the equations exists, then so does the other one, and their values are equal. Let \begin{equation}\label{eq:Phi}
\Phi ( A ) = \setofLR{ x \in X}{ \mathscr{D}_A ( x ) = 1 } . \end{equation} The set of \markdef{blurry points of \( A \)} is \[ \Blur ( A ) = \setof{ x \in X }{ \mathscr{O}_A ( x ) > 0 }, \] the set of \markdef{sharp points of \( A \)} is \[ \Sharp ( A ) = \setof{ x \in X }{ \mathscr{D}_A ( x ) \in ( 0 ; 1 ) } \] and \[ \Exc ( A ) \equalsdef \Blur ( A ) \cup \Sharp ( A ) \] is the set of \markdef{exceptional points of \( A \)}. For \( x \in X \) let \[ \begin{split} \exc_A ( x ) & = \sup \setof{ \delta \leq 1 / 2 }{ \delta \leq \mathscr{D}^- _A ( x ) \leq \mathscr{D}^+ _A ( x ) \leq 1 - \delta } \\
& = \min \setLR{ \mathscr{D}_A^- ( x ) , 1 - \mathscr{D}_A^+ ( x ) } \leq 1 / 2 . \end{split} \] If \( x \in \Phi ( A ) \cup \Phi ( A^\complement ) \) then \( \exc_A ( x ) = 0 \), so this notion is of interest only when \( x \in \Exc ( A ) \). Let \begin{equation*} \boldsymbol{ \delta }_A = \sup \setof{ \exc_A ( x ) }{ x \in X } \leq 1 / 2 . \end{equation*} If \( A \) is either null or co-null, then \( \boldsymbol{ \delta }_A = 0 \), so this justifies the restriction to nontrivial sets in the following definition: \begin{equation}\label{eq:delta(X)} \boldsymbol{ \delta } ( X ) = \inf \setof{ \boldsymbol{ \delta }_A }{ \eq{A} \in \MALG \setminus \set{ \eq{\emptyset} , \eq{X} } } . \end{equation} The following are easily checked. \begin{subequations} \begin{gather} \mathscr{O}_A ( x ) = 0 \mathbin{\, \Rightarrow \,} \exc_A ( x ) = \min \setLR{ \mathscr{D}_A ( x ) , 1 - \mathscr{D}_A ( x ) } \\ \exc_A ( x ) = 1 / 2 \mathbin{\, \Leftrightarrow \,} \mathscr{D}_A ( x ) = 1 / 2 \\
\exc_A ( x ) = 0 \mathbin{\, \Leftrightarrow \,} x \in \Phi ( A ) \cup \Phi ( A^\complement ) \mathbin{\, \vee \,} \mathscr{D}^-_A ( x ) = 0 \mathbin{\, \vee \,} \mathscr{D}^+_A ( x ) = 1 \\ \boldsymbol{ \delta }_A = 0 \mathbin{\, \Rightarrow \,} \Sharp ( A ) = \emptyset . \end{gather} \end{subequations}
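Continuing the example following the definition of density: for \( A = \cointerval{ 0 }{ + \infty } \subseteq \mathbb{R} \) one has \( \exc_A ( 0 ) = \min \setLR{ 1 / 2 , 1 - 1 / 2 } = 1 / 2 \) and \( \exc_A ( x ) = 0 \) for \( x \neq 0 \), so \( \boldsymbol{ \delta }_A = 1 / 2 \), \( \Blur ( A ) = \emptyset \), and \( \Exc ( A ) = \Sharp ( A ) = \setLR{ 0 } \).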
\begin{remarks}\label{rmks:exc} \begin{enumerate-(a)} \item If \( A \) is clopen and nontrivial in \( X \), then \( \boldsymbol{ \delta } _A = 0 \), so if \( X \) is disconnected, then \( \boldsymbol{ \delta } ( X ) = 0 \). In particular \( \boldsymbol{ \delta } ( X ) = 0 \) when \( X \) is a closed subset of the Baire space \( \pre{\omega}{\omega} \). The case when \( X = \mathbb{R} \) is completely different---see Section~\ref{sec:solidsets}. \item The notions above are \( =_\mu \)-invariant, that is if \( A =_\mu B \) then \( \exc_A ( x ) = \exc_B ( x ) \), \( \boldsymbol{ \delta }_A = \boldsymbol{ \delta }_B \), and \( \Blur ( A ) = \Blur ( B ) = \Blur ( A^\complement ) \), and similarly for \( \Sharp \) and \( \Exc \). \end{enumerate-(a)} \end{remarks}
\subsection{Density in the real line}\label{subsec:realline} Let \( \lambda \) be the Lebesgue measure on \( \mathbb{R} \). For \( A \subseteq \mathbb{R} \) a measurable set, the \markdef{right density} of \( A \) at \( x \) is defined as \[ \mathscr{D}_A ( x^ + ) = \lim_{ \varepsilon {\downarrow} 0} \frac{ \lambda ( A \cap ( x ; x + \varepsilon ) )}{ \varepsilon } , \] and the \markdef{left density} \( \mathscr{D}_A ( x^ - ) \) is defined similarly. If \( \mathscr{D}_A ( x^ + ) \) and \( \mathscr{D}_A ( x ^- ) \) both exist, then \( \mathscr{D}_A ( x ) \) exists, and in this case \[ \mathscr{D}_A ( x ) = \frac{\mathscr{D}_A ( x^ + ) + \mathscr{D}_A ( x ^- )}{2} . \] Conversely, \[ \mathscr{D}_A ( x ) \in \setLR{0 , 1} \Rightarrow \mathscr{D}_A ( x^ + ) = \mathscr{D}_A ( x ^- ) = \mathscr{D}_A ( x ) . \] This implication cannot be extended to other values of \( \mathscr{D}_A ( x ) \), as the next example shows.
\begin{example}\label{xmp:densitybutnoleftorrightdensities} The set \[ A = \bigcup_{n} ( - 2^{ - 2 n - 1 } ; - 2^{ - 2 n - 2 } ) \cup ( 2^{ - 2n - 1} ; 2^{ -2n } ) \] is open and such that \( \mathscr{D}_A ( 0 ) = 1 / 2 \), but \( \mathscr{D}_A ( 0^ + ) \) and \( \mathscr{D}_A ( 0 ^-) \) do not exist. \end{example}
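For the reader's convenience we sketch the verification. Up to a countable (hence null) set, the reflection \( x \mapsto - x \) maps \( A \cap ( - \varepsilon ; 0 ) \) onto \( ( 0 ; \varepsilon ) \setminus A \), so \( \lambda ( A \cap ( - \varepsilon ; \varepsilon ) ) = \varepsilon \) for every \( \varepsilon > 0 \), whence \( \mathscr{D}_A ( 0 ) = 1 / 2 \). On the other hand \[ \frac{ \lambda ( A \cap ( 0 ; 2^{ - 2 n } ) )}{ 2^{ - 2 n } } = \sum_{ m \geq n } \frac{ 2^{ - 2 m - 1 } }{ 2^{ - 2 n } } = \frac{ 2 }{ 3 } , \qquad \frac{ \lambda ( A \cap ( 0 ; 2^{ - 2 n - 1 } ) )}{ 2^{ - 2 n - 1 } } = \frac{ 1 }{ 3 } , \] so \( \mathscr{D}_A ( 0 ^+ ) \) does not exist, and by symmetry neither does \( \mathscr{D}_A ( 0 ^- ) \).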
\subsection{Density in the Cantor and Baire spaces} Suppose \( T \) is a pruned tree on \( \omega \) and \( \mu \) is a finite Borel measure on \( \body{T} \) induced by some \( w \colon T \to [ 0 ; M ] \) as in Section~\ref{subsubsec:measureonCantor}. Since the metric attains values in \( \setLR{0} \cup \setofLR{2^{-n}}{n \in \omega } \), we have \[ \mathscr{D}^ + _A ( z ) = \limsup_{ n \to \infty } \frac{ \mu ( A \cap {\boldsymbol N}\!_{z \mathpunct{\upharpoonright} n} )}{ w ( z \mathpunct{\upharpoonright} n ) } , \qquad \mathscr{D}^- _A ( z ) = \liminf_{ n \to \infty } \frac{ \mu ( A \cap {\boldsymbol N}\!_{z \mathpunct{\upharpoonright} n} )}{ w ( z \mathpunct{\upharpoonright} n ) } . \] In particular, when \( T = \pre{ < \omega }{2} \) and \( \mu = \mu^{\mathrm{C}} \), then \( w ( s ) = 2^{- \lh s } \) so \( \frac{ \mu ( A \cap {\boldsymbol N}\!_{z \mathpunct{\upharpoonright} n} )}{ w ( z \mathpunct{\upharpoonright} n ) } = \mu ( \LOC{A}{ z \mathpunct{\upharpoonright} n } ) \), and the equations above become \[ \mathscr{D}^ + _A ( z ) = \limsup_{ n \to \infty }\mu ( \LOC{A}{ z \mathpunct{\upharpoonright} n } ) , \qquad \mathscr{D}^- _A ( z ) = \liminf_{ n \to \infty } \mu ( \LOC{A}{ z \mathpunct{\upharpoonright} n } ) . \]
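For instance, if \( A = {\boldsymbol N}\!_t \) and \( z \supseteq t \), then \( \LOC{A}{ z \mathpunct{\upharpoonright} n } = \pre{\omega }{2} \) for all \( n \geq \lh ( t ) \), so \( \mathscr{D}_A ( z ) = 1 \); if instead \( z \notin {\boldsymbol N}\!_t \), then \( {\boldsymbol N}\!_{ z \mathpunct{\upharpoonright} n } \cap A = \emptyset \) for all large enough \( n \), so \( \mathscr{D}_A ( z ) = 0 \). Thus every clopen set has density \( 1 \) at each of its points and density \( 0 \) elsewhere.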
\subsection{Bases for density}\label{subsec:basisofdifferentiation} Let \( ( X , d , \mu ) \) be a fully supported, locally finite metric measure space. Although the definition of \( \mathscr{D}^\pm_A ( x ) \) requires that balls centered at \( x \) of all radii be considered, it is possible to compute the limit along some specific sequences converging to \( 0 \).
\begin{definition}\label{def:densitybasis} Suppose \( \varepsilon_n {\downarrow} 0 \) and let \( x\in X \). \begin{enumerate-(i)} \item\label{def:densitybasis-i} \( ( \varepsilon _n )_n \) is a \markdef{basis for density at} \( x \) if for all \( A \in \MEAS_\mu \) and all \( r \in [ 0 ; 1 ] \) \[ \lim_n \frac{ \mu ( A \cap \Ball ( x ; \varepsilon_n ) ) }{ \mu ( \Ball ( x ; \varepsilon_n ) ) } = r \mathbin{\, \Rightarrow \,} \mathscr{D}_A ( x ) = r . \] \item\label{def:densitybasis-ii} \( ( \varepsilon _n )_n \) is a \markdef{strong basis for density at} \( x \) if for all \( A \in \MEAS_\mu \) \[ \limsup_{n \to \infty } \frac{ \mu ( A\cap \Ball ( x ; \varepsilon_n ) ) }{\mu ( \Ball ( x ; \varepsilon_n ) ) } = \mathscr{D}^+_A ( x ) . \] \end{enumerate-(i)} \end{definition}
If \( ( \varepsilon _n )_n \) is a strong basis for density at \( x \), then by taking complements \[ \liminf_{n \to \infty } \frac{ \mu ( A\cap \Ball ( x ; \varepsilon_n ) ) }{\mu ( \Ball ( x ; \varepsilon_n ) ) } = \mathscr{D}^-_A ( x ) \] for all \( A \in \MEAS_\mu \). The sequence \( \varepsilon _n = 2^{-n} \) is a strong basis for density at every point, both in the Cantor and in the Baire space.
\begin{theorem}\label{thm:densitybasis} Suppose \( \varepsilon_n {\downarrow} 0 \). \begin{enumerate-(a)} \item\label{thm:densitybasis-a} If \( \lim_n \frac{ \mu ( \Ball ( x ; \varepsilon_{n + 1} ) ) }{ \mu ( \Ball ( x ; \varepsilon_n ) ) } = 1 \) then \( ( \varepsilon_n )_n \) is a strong basis for density at \( x \).
\item\label{thm:densitybasis-b} If \( ( \varepsilon_n )_n \) is a basis for density at \( x \), and \( r \mapsto \mu ( \Ball ( x ; r ) ) \) is continuous, then \( \lim_n \frac{ \mu ( \Ball ( x ; \varepsilon_{n + 1} ) ) }{ \mu ( \Ball ( x ; \varepsilon_n ) ) } = 1 \), and hence \( ( \varepsilon_n )_n \) is a strong basis for density at \( x \). \end{enumerate-(a)} \end{theorem}
\begin{proof} \ref{thm:densitybasis-a} Let \( A \) be measurable, and suppose \[ \limsup_{n \to \infty} \frac{\mu ( \Ball ( x ; \varepsilon _n ) \cap A )}{ \mu ( \Ball ( x ; \varepsilon _n ) )} = r . \] Thus \( \mathscr{D}^+_A ( x ) \geq r \). To prove the reverse inequality we must show that \[
\FORALL{ \varepsilon > 0} \EXISTS{ \delta > 0} \FORALL{0 < \eta < \delta }\Bigl [ \frac{\mu ( \Ball ( x ; \eta ) \cap A )}{ \mu ( \Ball ( x ; \eta ) )} < r + \varepsilon \Bigr ]. \] For each \( \varepsilon > 0 \) choose \( n_1 = n_1 ( \varepsilon ) \in \omega \) such that \[ m \geq n_1 \mathbin{\, \Rightarrow \,} \frac{\mu ( \Ball ( x ; \varepsilon _m ) \cap A )}{ \mu ( \Ball ( x ; \varepsilon _m ) )} < r + \frac{\varepsilon}{2} . \] We now take cases, depending on whether \( r = 0 \) or \( r > 0 \).
Suppose first \( r = 0 \), and fix \( 0 < \varepsilon < 1 \). Let \( n_2 = n_2 ( \varepsilon ) \in \omega \) be such that \[ m \geq n_2 \mathbin{\, \Rightarrow \,} \frac{\mu ( \Ball ( x ; \varepsilon _{m } ) )}{ \mu ( \Ball ( x ; \varepsilon _{ m + 1} ) )} \leq 1 + \varepsilon . \] We claim that \( \delta = \varepsilon _{\bar{n}} \) will do, when \( \bar{n} = \max ( n_1 , n_2 ) \). Let \( 0 < \eta < \delta \). Since \( \varepsilon _n { \downarrow } 0 \), fix \( k \geq \bar{n} \) such that \begin{equation}\label{eq:th:densitybasis2}
\varepsilon _{ k + 1} < \eta \leq \varepsilon _k . \end{equation} Then \begin{align*}
\frac{\mu ( \Ball ( x ; \eta ) \cap A )}{ \mu ( \Ball ( x ; \eta ) ) } & \leq \frac{\mu ( \Ball ( x ; \varepsilon _k ) \cap A )}{ \mu ( \Ball ( x ; \varepsilon _{ k + 1 } ) ) } & \text{by~\eqref{eq:th:densitybasis2}}
\\
& = \frac{\mu ( \Ball ( x ; \varepsilon _k ) \cap A )}{ \mu ( \Ball ( x ; \varepsilon _{ k } ) ) } \frac{\mu ( \Ball ( x ; \varepsilon _k ) )}{ \mu ( \Ball ( x ; \varepsilon _{ k + 1 } ) ) }
\\
& \leq \frac{ \varepsilon ( 1 + \varepsilon ) }{2}
\\
& < \varepsilon . \end{align*}
Suppose now \( r > 0 \), and choose \( 0 < \varepsilon < r \). Let \( n_2 = n_2 ( \varepsilon ) \) be such that \[ m \geq n_2 \Rightarrow \frac{ \mu ( \Ball ( x ; \varepsilon _m ) ) }{ \mu ( \Ball ( x ; \varepsilon _{ m + 1 } ) ) } \leq 1 + \frac{ \varepsilon }{ 4 r } . \] The argument is as before: let \( \delta = \varepsilon _{ \bar{n} } \) where \( \bar{n} = \max ( n_1 , n_2 ) \), and given \( 0 < \eta < \delta \), fix \( k \geq \bar{n} \) such that \( \varepsilon _{ k + 1 } < \eta \leq \varepsilon _k \). Then \[ \begin{split}
\frac{\mu ( \Ball ( x ; \eta ) \cap A )}{ \mu ( \Ball ( x ; \eta ) ) } & \leq \frac{\mu ( \Ball ( x ; \varepsilon _k ) \cap A )}{ \mu ( \Ball ( x ; \varepsilon _{ k } ) ) } \frac{\mu ( \Ball ( x ; \varepsilon _k ) )}{ \mu ( \Ball ( x ; \varepsilon _{ k + 1 } ) ) } \\
& \leq \left ( r + \frac{ \varepsilon }{ 2 } \right ) \left ( 1 + \frac{ \varepsilon }{ 4 r } \right )
\\
& < r + \varepsilon . \end{split} \]
\ref{thm:densitybasis-b} Towards a contradiction, suppose there is \( r < 1 \) and a subsequence \( ( \varepsilon_{n_k} )_k \) such that \[ \lim_{k \to \infty } \frac{ \mu ( \Ball ( x ; \varepsilon_{ n_k + 1 } ) ) }{ \mu ( \Ball ( x ; \varepsilon_{n_k} ) ) } = r . \] For each \( n \), let \( \delta_n \in ( \varepsilon_{n + 1} ; \varepsilon_n ) \) be such that \( \mu ( \Ball ( x ; \delta_n ) ) = \frac{1}{2}[ \mu ( \Ball ( x ; \varepsilon_{n + 1 } ) ) + \mu ( \Ball ( x ; \varepsilon_n ) ) ] \); such \( \delta_n \) exists by the intermediate value theorem, since \( r \mapsto \mu ( \Ball ( x ; r ) ) \) is continuous. Define \[ A = \bigcup_{n }( \Ball ( x ; \delta_n ) \setminus \Ball ( x ; \varepsilon_{ n + 1 } ) ) . \]
Then \( \mu ( A \cap \Ball ( x ; \varepsilon_n ) ) / \mu ( \Ball ( x ; \varepsilon_n ) ) = \frac 12 \): indeed \( \mu ( A \cap \Ball ( x ; \varepsilon_n ) ) = \sum_{m \geq n} [ \mu ( \Ball ( x ; \delta_m ) ) - \mu ( \Ball ( x ; \varepsilon_{m + 1} ) ) ] = \sum_{m \geq n} \frac{1}{2} [ \mu ( \Ball ( x ; \varepsilon_m ) ) - \mu ( \Ball ( x ; \varepsilon_{m + 1} ) ) ] = \frac{1}{2} \mu ( \Ball ( x ; \varepsilon_n ) ) \), using \( \lim_{m \to \infty } \mu ( \Ball ( x ; \varepsilon_m ) ) = \mu ( \set{ x } ) = 0 \), which holds by continuity. On the other hand, \[ \begin{split} \frac{\mu (A \cap \Ball ( x ; \delta_n ) ) }{\mu ( \Ball ( x ; \delta_n ) ) }& = \frac{ \frac{1}{2} [ \mu ( \Ball ( x ; \varepsilon_n ) ) - \mu ( \Ball ( x ; \varepsilon_{n + 1 } ) )] + \frac{1}{2}[ \mu ( \Ball ( x ; \varepsilon_{n + 1 } ) ) ] }{ \frac{1}{2}[ \mu ( \Ball ( x ; \varepsilon_{n + 1 } ) ) + \mu ( \Ball ( x ; \varepsilon_n ) ) ] }
\\ & = \frac{\mu ( \Ball ( x ; \varepsilon_n ) ) }{\mu ( \Ball ( x ; \varepsilon_{n + 1 } ) ) + \mu ( \Ball ( x ; \varepsilon_n ) ) } \\ &= \Bigl ( \frac{\mu ( \Ball ( x ; \varepsilon_{n + 1 } ) ) }{\mu ( \Ball ( x ; \varepsilon_n ) ) } + 1 \Bigr )^{-1} . \end{split} \] Since \( \bigl ( \frac{ \mu ( \Ball ( x ; \varepsilon_{n_k + 1 } ) )}{ \mu ( \Ball ( x ; \varepsilon_{n_k} ) ) } + 1 \bigr )^{-1} \rightarrow \frac {1}{r + 1} > \frac {1}{2} \), then \( (\varepsilon_n )_n \) is not a basis for density at \( x \). \end{proof}
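To illustrate the theorem in \( ( \mathbb{R} , \lambda ) \): for \( \varepsilon _n = 1 / ( n + 1 ) \) the ratios \( \lambda ( \Ball ( x ; \varepsilon _{n + 1} ) ) / \lambda ( \Ball ( x ; \varepsilon _n ) ) = ( n + 1 ) / ( n + 2 ) \) converge to \( 1 \), so by part~\ref{thm:densitybasis-a} this sequence is a strong basis for density at every \( x \); for \( \varepsilon _n = 2^{-n} \) the ratios are constantly equal to \( 1 / 2 \), so by part~\ref{thm:densitybasis-b} this sequence is not even a basis for density at any \( x \).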
The next example shows that ``\( \lim \)'' cannot be replaced by ``\( \limsup \)'' in the statement of Theorem~\ref{thm:densitybasis}.
\begin{example}\label{xmp:oscillatingdensity} If \( \mu \) is nonsingular then for any \( x \in X \) there is a set \( A \in \MEAS_\mu \) such that for some sequence \( \varepsilon _n {\downarrow} 0 \), \[ \lim_{n \to \infty} \frac{ \mu ( A \cap \Ball ( x ; \varepsilon_{2n} ) ) }{ \mu ( \Ball ( x ; \varepsilon_{2n} ) ) } = 1 \quad \text{and} \quad \lim_{n \to \infty} \frac{ \mu ( A \cap \Ball ( x ; \varepsilon_{2n + 1} ) )}{ \mu (\Ball ( x ; \varepsilon_{2 n + 1} ) ) } = 0 , \] hence \( \mathscr{O}_A ( x ) = 1 \). Moreover \( A \) can be taken to be open or closed.
Choose \( ( \varepsilon _n )_n \) strictly decreasing, converging to \( 0 \), and such that \begin{equation}\label{eq:convergenceCamillo} \lim_{n \to \infty} \frac{\mu ( \Ball ( x ; \varepsilon_{n + 1} ) )}{ \mu ( \Ball ( x ; \varepsilon_{n} ) ) } = 0 . \end{equation} This can be done as \( \mu \) is nonsingular. Let \[ A = \bigcup_{n} \Ball ( x ; \varepsilon_{2n} ) \setminus \Ball ( x ; \varepsilon_{2n + 1} ) . \] Then \[ \frac{\mu ( A \cap \Ball ( x ; \varepsilon_{2n} ) )}{\mu ( \Ball ( x ; \varepsilon_{2n} ) ) } > \frac{\mu ( \Ball ( x ; \varepsilon_{2n} ) \setminus \Ball ( x ; \varepsilon_{2n + 1} ) )}{\mu ( \Ball ( x ; \varepsilon_{2n} ) ) } = 1 - \frac{ \mu ( \Ball ( x ; \varepsilon_{2n + 1} ) )}{\mu ( \Ball ( x ; \varepsilon_{2n} ) ) } \to 1 \] and \[ \frac{\mu ( A \cap \Ball ( x ; \varepsilon_{2n + 1} ) )}{\mu ( \Ball ( x ; \varepsilon_{2n + 1} ) ) } < \frac{\mu ( \Ball ( x ; \varepsilon_{2n + 2} ) )}{\mu ( \Ball ( x ; \varepsilon_{2n + 1} ) ) } \to 0 . \]
To construct an \( A \) which is open or closed, argue as follows. Let \( ( \varepsilon '_n )_n {\downarrow} 0 \) satisfy~\eqref{eq:convergenceCamillo}, and let \( \varepsilon _n = \varepsilon _{2n}' \). Then \( \bigcup_{n} \Ball ( x ; \varepsilon_{2n} ) \setminus \Cl \Ball ( x ; \varepsilon_{2n + 1} ) \) and \( \set{x} \cup \bigcup_{n} \Cl \Ball ( x ; \varepsilon_{2n} ) \setminus \Ball ( x ; \varepsilon_{2n + 1} ) \) are as required, and are open and closed, respectively. \end{example}
\subsection{The function \( \Phi \)} Let us list some easy facts about the map \( \Phi \) introduced in~\eqref{eq:Phi}: \begin{itemize}[leftmargin=1pc] \item \( A \subseteq_\mu B \mathbin{\, \Rightarrow \,} \Phi ( A ) \subseteq \Phi ( B ) \), and therefore \( A =_\mu B \Rightarrow \Phi ( A ) = \Phi ( B ) \). Thus the map \( \MALG_\mu \to \mathscr{P} ( X ) \), \( \eq{A} \mapsto \Phi ( A ) \), is well-defined; \item \( \Phi ( A \cap B ) = \Phi ( A ) \cap \Phi ( B ) \) hence \( \Phi ( A^\complement ) \subseteq ( \Phi ( A ) )^\complement \); \item \( \Phi ( A \cup B ) \supseteq \Phi ( A ) \cup \Phi ( B ) \); and more generally \( \Phi ( \bigcup_{i \in I} A_i ) \supseteq \bigcup_{i \in I} \Phi ( A_i ) \), provided \( \bigcup_{i \in I} A_i \in \MEAS_\mu \); \item \( \Phi ( U ) \supseteq U \), for \( U \) open, and \( \Phi ( C ) \subseteq C \), for \( C \) closed; \item
\( \Phi (C_1 \cup C_2 ) = \Phi ( C_1 ) \cup \Phi ( C_2 ) \), if \( C_1 , C_2 \) are disjoint closed sets. \end{itemize}
\begin{definition}\label{def:DPP} A Radon metric space \( ( X , d , \mu ) \) has the \markdef{Density Point Property} (DPP) if \( A \mathop{\triangle} \Phi ( A ) \in \NULL \) for each \( A \in \MEAS_\mu \). \end{definition}
Thus in a DPP space almost every point is in \( \Phi ( A ) \cup \Phi ( A^\complement ) \), so \( \Exc ( A ) \), \( \Blur ( A ) \), and \( \Sharp ( A ) \) are null. The Lebesgue density theorem states that \( \mathbb{R}^n \) with the Lebesgue measure \( \lambda^n \) and the \( \ell_p \)-norm has the DPP, and this result holds also for \( \pre{\omega }{2} \) with \( \mu^{\mathrm{C}} \) and the standard ultrametric. In fact if \( \mu \) is a Borel measure on an ultrametric space \( ( X , d ) \), then \( ( X , d , \mu ) \) has the DPP~\cite[]{Miller:2008fk}. Not every Polish measure space is DPP~\cite[][Example 5.6]{Kaenmaki:2015sf}.
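For a concrete instance of the Lebesgue density theorem: if \( A = [ 0 ; 1 ] \subseteq \mathbb{R} \), then \( \Phi ( A ) = ( 0 ; 1 ) \), since \( \mathscr{D}_A ( 0 ) = \mathscr{D}_A ( 1 ) = 1 / 2 \), so \( A \mathop{\triangle} \Phi ( A ) = \set{ 0 , 1 } \) is indeed null.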
\subsection{The complexity of the density function} \begin{proposition} If \( ( X , d , \mu ) \) is separable and \( \mu \) finite, then the map \[ X \times \cointerval{ 0 }{ + \infty } \to \cointerval{ 0 }{ + \infty } , \qquad ( x , r ) \mapsto \mu ( \Ball ( x ; r ) ) \] is Borel. \end{proposition}
\begin{proof} By multiplying by a suitable number, we may assume that \( \mu \) is a probability measure. By~\cite[][Theorem 17.25]{Kechris:1995kc} with \[ A = \setofLR{ ( x , r , y ) \in X \times \cointerval{ 0 }{ + \infty } \times X}{ d ( x , y ) < r} \] then \( ( x , r ) \mapsto \mu ( A_{( x , r )} ) = \mu ( \Ball ( x ; r ) ) \) is Borel. \end{proof}
Several results can be proved under the assumption that either the measure is continuous or else that the space is a closed subset of the Baire space. The next definition aims at generalizing both situations.
\begin{definition}\label{def:amenablespace} A fully supported Radon metric space \( ( X , d , \mu ) \) is \markdef{amenable} if there are functions \( \varepsilon _n \colon X \to \cointerval{0}{+\infty} \) such that \begin{itemize} \item \( ( \varepsilon _n ( x ) )_n \) is a strong basis for density at \( x \), for all \( x \in X \), \item the map \( X \to \MALG \), \( x \mapsto \eq{\Ball ( x ; \varepsilon _n ( x ) ) } \) is continuous, for all \( n \in \omega \). \end{itemize} \end{definition}
\begin{examples} \begin{enumerate-(a)} \item If \( \mu \) is continuous, then \( ( X , d , \mu ) \) is amenable. In fact, let \( \varepsilon_n ( x ) \) be the largest \( \varepsilon \leq 1 \) such that \( \mu ( \Ball ( x ; \varepsilon ) ) \leq 1 / n \). By Theorem~\ref{thm:densitybasis} \( ( \varepsilon _n ( x ) )_n \) is a strong basis for density at \( x \); since the \( \varepsilon _n \) are continuous, by Lemma~\ref{lem:continuityofmeasure} \( x \mapsto \eq{\Ball ( x ; \varepsilon _n ( x ) ) } \) is continuous. \item If \( X \) is a closed subset of the Baire space and \( d \) is the induced metric, then \( ( X , d , \mu ) \) is amenable, as taking \( \varepsilon_n ( x ) = 2^{-n} \) the map \( x \mapsto \Ball ( x ; \varepsilon _n ( x ) ) \) is locally constant. \end{enumerate-(a)} \end{examples}
\begin{lemma}\label{lem:amenable} Suppose \( ( X , d , \mu ) \) is amenable. Then \[ f_n \colon X \times \MALG \to [ 0 ; 1 ] , \quad ( x , \eq{A} ) \mapsto \frac{ \mu ( A \cap \Ball ( x ; \varepsilon _n ( x ) ) ) }{ \mu ( \Ball ( x ; \varepsilon _n ( x ) ) ) } \] is continuous. \end{lemma}
\begin{proof} It is enough to show that \( ( x , \eq{A} ) \mapsto \mu ( A \cap \Ball ( x ; \varepsilon _n ( x ) ) ) \) is continuous. This follows from the continuity of \( \hat{ \mu } \colon \MALG \to [ 0 ; + \infty ] \), and \begin{multline*} \cardLR{ \mu \left ( \Ball ( x ; \varepsilon _n ( x ) ) \cap A \right ) - \mu \left ( \Ball ( x' ; \varepsilon _n ( x' ) ) \cap A' \right ) } \\ \leq \mu \Bigl ( \bigl ( \Ball ( x ; \varepsilon _n ( x ) ) \cap A \bigr ) \mathop{\triangle} \bigl ( \Ball ( x' ; \varepsilon _n ( x' ) ) \cap A ' \bigr )\Bigr ) \\
\leq \mu \left ( \Ball ( x ; \varepsilon _n ( x ) ) \mathop{\triangle} \Ball ( x' ; \varepsilon _n ( x' ) ) \right ) + \mu ( A \mathop{\triangle} A' ) . \qedhere \end{multline*}
\end{proof}
\begin{lemma}\label{lem:densityBaire2} If \( ( X , d , \mu ) \) is amenable, then \( \mathscr{D}^+ \colon X \times \MALG \to [ 0 ; 1 ] \), \( ( x , \eq{A} ) \mapsto \mathscr{D}_A^+ ( x ) \) is in \( \mathscr{B}_2 \), and similarly for \( \mathscr{D}^- \) and \( \mathscr{O} \). \end{lemma}
\begin{proof} Let \( f_n \) be as in Lemma~\ref{lem:amenable}. Then \( g_n ( x , \eq{A} ) = \sup_{m \geq n} f_m ( x , \eq{A} ) \) is in \( \mathscr{B}_1 \), and therefore \( \limsup_n f_n ( x , \eq{A} ) = \lim_n g_n ( x , \eq{A} ) \) is in \( \mathscr{B}_2 \). As \( ( \varepsilon _n ( x ) )_n \) is a strong basis for density at \( x \), it follows that \( \mathscr{D}_A^+ ( x ) = \lim_n g_n ( x , \eq{A} ) \). The case of \( \mathscr{D}^- \) and of \( \mathscr{O} \) is similar. \end{proof}
Taking the preimage of the closed set \( \setLR{1} \) under the Baire class \( 2 \) map \( x \mapsto \mathscr{D}^-_A ( x ) \), we get
\begin{corollary}\label{cor:PhiPi03} If \( ( X , d , \mu ) \) is amenable, then \( \Phi ( A ) \in \boldsymbol{\Pi}^{0}_{3} \). \end{corollary}
The complexity cannot be lowered in Corollary~\ref{cor:PhiPi03} when \( X \) is the real line or the Cantor space. When \( A \subseteq \pre{\omega }{2} \) is nontrivial, if \( \Phi ( A ) \) has empty interior, then it is \( \boldsymbol{\Pi}^{0}_{3} \)-complete~\cite[Theorem 1.3]{Andretta:2013uq}. If \( K \subseteq \mathbb{R} \) is a sufficiently regular Cantor set of positive measure, then \( \Phi ( K ) \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete~\cite{Carotenuto:2015kq}.
Notice that \[ x\in \Sharp ( A ) \mathbin{\, \Leftrightarrow \,} \mathscr{O}_A ( x ) = 0 \wedge \EXISTS{q \in \mathbb{Q}_+} \forall ^\infty n \left ( q \leq f_n ( x , A ) \leq 1-q \right ) \] where \( f_n \) is as in Lemma~\ref{lem:amenable}. Thus in the hypotheses of Lemma~\ref{lem:densityBaire2}, \[ \Blur ( A ) \in \boldsymbol{\Sigma}^{0}_{3}, \quad \Sharp ( A ) \in \boldsymbol{\Pi}^{0}_{3}, \quad \Exc ( A ) \in \boldsymbol{\Sigma}^{0}_{3} . \]
\subsection{Solid sets}\label{sec:solidsets} \begin{definition} Let \( ( X , d , \mu ) \) be a Radon metric space. A measurable \( A \subseteq X \) is \begin{itemize} \item \markdef{solid} iff \( \Blur ( A ) = \emptyset \), \item \markdef{quasi-dualistic} iff \( \Sharp (A ) = \emptyset \), \item \markdef{dualistic} iff it is quasi-dualistic and solid iff \( \Exc ( A ) = \emptyset \), \item \markdef{spongy} iff \( \Blur ( A ) \neq \emptyset = \Sharp (A ) \) iff it is quasi-dualistic but not solid. \end{itemize} \end{definition}
The collections of sets that are solid, dualistic, quasi-dualistic, or spongy are denoted by \( \Solid \), \( \Dual \), \( \qDual \), and \( \Spongy \). Moreover \[ \boldsymbol{\Delta}^{0}_{1} \subseteq \Dual = \Solid \cap \qDual . \] Therefore if the space \( X \) is disconnected, e.g. \( X = \pre{\omega }{2} \), there are nontrivial dualistic sets, so, adopting the notation of~\eqref{eq:delta(X)}, we conclude that \( \boldsymbol{ \delta } ( X ) = 0 \). In the Cantor space there are examples of dualistic sets that are not \( =_\mu \) to any clopen set, see~\cite[][Section 3.4]{Andretta:2013uq}.
The situation for \( \mathbb{R} \) is completely different: V.~Kolyada~\cite{Kolyada:1983fk} showed that \( 0 < \boldsymbol{ \delta } ( \mathbb{R} ) < 1 / 2 \), thus, in particular, there are no nontrivial dualistic subsets of \( \mathbb{R} \). The bounds for \( \boldsymbol{ \delta } ( \mathbb{R} ) \) were successively improved in~\cite{Szenes:2011fk,Csornyei:2008uq}, and in~\cite{Kurka:2011kx} it is shown that \( \boldsymbol{ \delta } ( \mathbb{R} ) \approx 0. 268486 \dots \) is the unique real root of \( 8 x^3 + 8 x^2 + x - 1 \). A curious consequence is that for each \( \varepsilon > 0 \) there are nontrivial sets \( A \subset \mathbb{R} \) such that \( \ran( \mathscr{D}_A ) \cap ( \boldsymbol{ \delta } ( \mathbb{R} ) + \varepsilon ; 1 - \boldsymbol{ \delta } ( \mathbb{R} ) - \varepsilon ) = \emptyset \); in other words, for any real \( x \) either \( \mathscr{D} _A ( x ) \in [ 0 ; \boldsymbol{ \delta } ( \mathbb{R} ) + \varepsilon ] \cup [ 1 - \boldsymbol{ \delta } ( \mathbb{R} ) - \varepsilon ; 1 ] \) or \( \mathscr{D}^+_A ( x ) \geq 1 - \boldsymbol{ \delta } ( \mathbb{R} ) - \varepsilon \) or else \( \mathscr{D}^-_A ( x ) \leq \boldsymbol{ \delta } ( \mathbb{R} ) + \varepsilon \). In particular, there is a set \( A \) that does not have points of density \( 1/2 \), in contrast with our intuition that a measurable subset of \( \mathbb{R} \) should have a ``boundary'' like an interval. We will show in Theorem~\ref{thm:solid} that this intuition is correct when \emph{solid} sets are considered.
Spongy subsets of \( \pre{\omega}{2} \) (or more generally, of closed subsets of \( \pre{\omega}{\omega} \)) are easy to construct, see~\cite[Example 3.8 in][]{Andretta:2013uq}. The existence of spongy subsets of connected spaces is more problematic. Theorem~\ref{thm:spongy} shows that there exists a spongy subset \( S \) of \( [ 0 ; 1 ] \), and for such \( S \) we have \( \boldsymbol{ \delta }_S \geq 1 / 3 \).
The families of sets \( \Solid \), \( \Dual \), \( \qDual \), and \( \Spongy \) are invariant under \( =_\mu \), so they can be defined on the measure algebra as well, that is to say: we can define \[
\widehat{\Solid} = \setof{\eq{A} \in \MALG }{ A \in \Solid } , \] and similarly for \( \widehat{\Dual} \), \( \widehat{\qDual} \) and \( \widehat{\Spongy} \).
\begin{proposition}\label{prop:solidsetBaireclassDensity} Let \( ( X , d , \mu ) \) be amenable and suppose that \( A \) is solid. Then \( \mathscr{D}_A \colon X \to [ 0 ; 1 ] \) is in \( \mathscr{B}_1 \). \end{proposition}
\begin{proof} Notice that \[ \mathscr{D}_A ( x ) > a \mathbin{\, \Leftrightarrow \,} \EXISTS{ q \in \mathbb{Q}_+ }\FORALLS{\infty}{ n } \Bigl ( \frac{ \mu ( A \cap \Ball ( x ; \varepsilon _n ( x ) ) ) }{ \mu ( \Ball ( x ; \varepsilon _n ( x ) ) ) } \geq a + q \Bigr ) \] and apply Lemma~\ref{lem:amenable}. Similarly for \( \mathscr{D}_A ( x ) < b \). \end{proof}
By the Baire category theorem we get:
\begin{corollary}\label{cor:solidsetBaireclassDensity} Let \( ( X , d , \mu ) \) be amenable and completely metrizable. If there are \( 0 \leq r < s \leq 1 \) such that \( \setofLR{x}{ \mathscr{D}_A ( x ) \leq r } \) and \( \setofLR{x}{ \mathscr{D}_A ( x ) \geq s } \) are dense in some nonempty open set, then \( A \notin \Solid \). \end{corollary}
\begin{proposition}\label{prop:solidsetGdelta} Let \( ( X , d , \mu ) \) be amenable and suppose that \( A \) is solid. Then \begin{enumerate-(a)} \item \( \Phi ( A ) , \Phi ( A^\complement ) \in \boldsymbol{\Pi}^{0}_{2} \), \item \( \Exc ( A ) = \Sharp ( A ) \in \boldsymbol{\Sigma}^{0}_{2} \), \item if \( 1 \) is an isolated value of \( \mathscr{D}_A \), that is to say \( \ran \mathscr{D}_A \subseteq [ 0 ; r ] \cup \setLR{1} \) for some \( r < 1 \), then \( \Phi ( A ) \in \boldsymbol{\Delta}^{0}_{2} \). \end{enumerate-(a)} In particular, if \( A \) is dualistic, then \( \Phi ( A ) \in \boldsymbol{\Delta}^{0}_{2} \). \end{proposition}
\begin{proof} By Proposition~\ref{prop:solidsetBaireclassDensity} \( \mathscr{D}_A \) is Baire class \( 1 \). Since \( \Phi ( A ) = \mathscr{D}_A^{-1} ( \setLR{ 1 } ) \) and \( \Phi ( A^\complement ) = \mathscr{D}_A^{-1} ( \setLR{ 0 } ) \) are preimages of closed sets, they are \( \Gdelta \); since \( A \) is solid, \( \Exc ( A ) = \Sharp ( A ) \) is the preimage under \( \mathscr{D}_A \) of the open interval \( ( 0 ; 1 ) \), hence \( \Fsigma \). If \( 1 \) is an isolated value of the density function, then \( \Phi ( A ) = \mathscr{D}_A^{-1} \ocinterval{ r }{ 1} \) is also \( \Fsigma \), thus it is \( \boldsymbol{\Delta}^{0}_{2} \). \end{proof}
The (possibly partial) function \( \mathscr{D}_A \colon \pre{\omega }{2} \to [ 0 ;1 ] \) has \( \boldsymbol{\Pi}^{0}_{3} \) graph, since \[ \mathscr{D}_ A ( x ) = r \mathbin{\, \Leftrightarrow \,} \FORALL{ \varepsilon \in \mathbb{Q}_+ } \EXISTS{ n } \FORALL{ k > n } \card{ \mu ( \LOC{A}{ x \mathpunct{\upharpoonright} k} ) - r } \leq \varepsilon , \] and its domain is \( \pre{\omega }{2} \setminus \Blur ( A ) \). So perhaps it is more natural to look at the total extension \( \mathscr{D}_A^* \colon \pre{\omega }{2} \to [ 0 ;1 ] \cup \set{*} \), where \( * \notin [ 0 ; 1 ] \) is a new point, taken to be isolated in \( [ 0 ;1 ] \cup \set{*} \), and \( \mathscr{D}_A^* ( x ) = * \) means that \( \mathscr{D}_A ( x ) \) is undefined.
\begin{proposition}
\( \mathrm{graph} ( \mathscr{D}_A^* ) \) is a boolean combination of \( \boldsymbol{\Pi}^{0}_{3} \) sets. \end{proposition}
\begin{proof}
\( ( z , r ) \in \mathrm{graph} ( \mathscr{D}_A^* ) \mathbin{\, \Leftrightarrow \,} \left ( \mathscr{O}_A ( z ) = 0 \wedge \mathscr{D}_A ( z ) = r \right ) \vee \left ( \mathscr{O}_A ( z ) > 0 \wedge r = * \right ) \), and each of the conditions on the right-hand side is \( \boldsymbol{\Pi}^{0}_{3} \), \( \boldsymbol{\Sigma}^{0}_{3} \), or clopen, by Lemma~\ref{lem:densityBaire2}.
\end{proof}
By~\cite[][Theorem 1.7]{Andretta:2013uq}, working in the Cantor space we have that \( \setof{\eq{A} \in \MALG}{ \Phi ( A ) \text{ is \( \boldsymbol{\Pi}^{0}_{3} \)-complete}} \) is comeager.
\begin{corollary} \( \setofLR{\eq{A} \in \MALG ( \pre{\omega}{2} ) }{ \Blur ( A ) \neq \emptyset } = \MALG \setminus \widehat{ \Solid } \) is comeager. \end{corollary}
We will prove later (Theorem~\ref{thm:blurrypointsSigma03}) that the set of blurry points can be \( \boldsymbol{\Sigma}^{0}_{3} \)-complete, and in fact this is the case on a comeager set in the measure algebra.
\section{Compact sets in the measure algebra}\label{sec:compactsetsinMALG} Suppose \( ( X , d , \mu ) \) is a separable Radon metric space and \( A \in \MEAS_\mu \). The \markdef{\( \mu \)-interior} of \( A \) is \[ \Int_\mu ( A ) = \bigcup \setof{ U \in \boldsymbol{\Sigma}^{0}_{1} ( X ) }{ U \subseteq_ \mu A } , \]
the \markdef{\( \mu \)-closure} of \( A \) is \[ \begin{split} \Cl_\mu ( A ) & = \bigcap \setof{ C \in \boldsymbol{\Pi}^{0}_{1} ( X ) }{ A \subseteq_ \mu C } \\
& = X \setminus \bigcup \setof{ U \in \boldsymbol{\Sigma}^{0}_{1} ( X ) }{ A \cap U \in \NULL_\mu } , \end{split} \] and the \markdef{\( \mu \)-frontier} of \( A \) is \[ \begin{split} \Fr_\mu ( A ) & = \Cl_\mu ( A ) \setminus \Int_\mu ( A ) \\
& = \setof{ x \in X }{ \FORALL{ U \in \boldsymbol{\Sigma}^{0}_{1} ( X )} ( x \in U \Rightarrow \mu ( A \cap U ) , \mu ( U \setminus A ) > 0 ) } . \end{split} \] Thus \( \Int_\mu ( A ) \) is open, and \( \Cl_\mu ( A ) \) and \( \Fr_\mu ( A ) \) are closed, and they behave like the usual topological operators, i.e. \( ( \Cl_\mu A )^ \complement = \Int_\mu ( A^ \complement ) \) and \( ( \Int_\mu A )^ \complement = \Cl_\mu ( A^ \complement ) \). (In~\cite{Andretta:2013uq} the sets \( \Cl_\mu ( A ) \) and \( \Int_\mu ( A ) \) were called the outer and inner supports of \( A \), and were denoted by \( \supt^+ ( A ) \) and \( \supt^- ( A ) \), respectively.) The \markdef{support of \( \mu \)} is \( \supt ( \mu ) = \Cl_\mu ( X ) \), and therefore \( \mu \) is fully supported if and only if \( \supt ( \mu ) = X \).
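For instance, in \( ( \mathbb{R} , \lambda ) \): if \( A = \mathbb{Q} \cap [ 0 ; 1 ] \), then \( \Cl_\mu ( A ) = \emptyset \), although the closure of \( A \) is \( [ 0 ; 1 ] \); if instead \( A = [ 0 ; 1 ] \setminus \mathbb{Q} \), then \( \Int_\mu ( A ) = ( 0 ; 1 ) \), \( \Cl_\mu ( A ) = [ 0 ; 1 ] \), and \( \Fr_\mu ( A ) = \set{ 0 , 1 } \), although \( A \) has empty interior.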
Clearly \( \Int_\mu ( A ) \subseteq \Phi ( A ) \), and the inclusion can be proper; for example if \( ( X , d , \mu ) \) is fully supported, locally finite and DPP take \( A \) to be closed of positive measure with empty interior. We start with a trivial observation, that will turn out to be useful in the proof of Theorem~\ref{thm:solid}.
\begin{lemma}\label{lem:useless} Let \( ( X , d , \mu ) \) be fully supported, locally finite, and DPP, and let \( A \in \MEAS_\mu \). Suppose \( \Fr_\mu A \) has nonempty interior. Then \( \Phi ( A ) \) and \( \Phi ( A^\complement ) \) are dense in \( \Int ( \Fr_\mu A ) \), so if \( ( X , d , \mu ) \) is amenable and completely metrizable, then \( A \) is not solid. \end{lemma}
\begin{proof} Let \( U \subseteq \Fr_\mu A \) be nonempty and open in \( X \): as \( U \) is disjoint from \( \Int_\mu ( A ) \cup \Int_\mu ( A^\complement ) \), then \( \mu ( A \cap U ) , \mu ( U \setminus A ) > 0 \), and therefore by DPP \( U \) intersects both \( \Phi ( A ) \) and \( \Phi ( A^\complement ) \). That \( A \) is not solid follows from Corollary~\ref{cor:solidsetBaireclassDensity}. \end{proof}
By separability \( \Cl_\mu A \) is the smallest closed set \( C \) such that \( A \subseteq_\mu C \), and therefore \( \Cl_\mu ( \Cl_\mu A ) = \Cl_\mu A \) by transitivity of \( \subseteq_\mu \). If \( C \) is closed, then \( C =_\mu \Cl_\mu ( C ) \), so \[ \FORALL{ C , D \in \boldsymbol{\Pi}^{0}_{1}} \left ( \Cl_\mu ( C ) =_\mu \Cl_\mu ( D ) \Rightarrow C =_\mu D \right ) \] hence, since the operator \( \Cl_\mu \) is \( =_\mu \)-invariant, \[ \FORALL{ A , B \in \MEAS_\mu }\left ( \Cl_\mu ( A ) =_\mu \Cl_\mu ( B ) \Rightarrow \Cl_\mu ( A ) = \Cl_\mu ( B ) \right ) . \] Therefore \( \Cl_\mu \) is a selector for the family \( \mathscr{F} \) defined below.
\begin{definition} If \( X \) is a topological space with a Borel measure \( \mu \), let \begin{align*} \mathscr{F} ( X , \mu ) & = \setof{ \eq{C} \in \MALG ( X , \mu )}{ C \text{ is closed}} \\ \mathscr{K} ( X , \mu ) & = \setof{ \eq{K} \in \MALG ( X , \mu )}{ K \text{ is compact}} . \end{align*} As usual the reference to \( X \) and/or \( \mu \) will be dropped whenever possible. \end{definition}
\begin{lemma}\label{lem:ClPhi=supt} If \( ( X , d , \mu ) \) is a separable Radon metric space, \( A \) is measurable, and \( A \subseteq_{\mu } \Phi ( A ) \), then \( \Cl \Phi ( A ) = \Cl_\mu A \). \end{lemma}
\begin{proof} First, \( \Phi (A ) \subseteq \Cl_\mu A \): any point in \( ( \Cl_\mu A)^{ \complement } \) is contained in some open \( U \) with \( \mu ( A \cap U ) = 0 \), so that \( \mathscr{D}^+_A \) vanishes on \( U \). Consequently, \( \Cl \Phi ( A ) \subseteq \Cl_\mu A \).
Conversely, given \( x \in \Cl_\mu A \) and any open neighborhood \( U \) of \( x \), one has \( \mu ( U \cap A ) > 0 \), thus \( \mu ( U \cap \Phi ( A ) ) > 0 \), whence \( U \cap \Phi ( A ) \neq \emptyset \). It follows that \( x \in \Cl \Phi ( A ) \). \end{proof}
Note that when \( X \) is DPP the assumption \( A \subseteq_{\mu } \Phi ( A ) \) is automatically satisfied. If \( X \) is a closed subset of \( \pre{\omega}{\omega} \), that is \( X = \body{T} \) for some pruned tree \( T \) on \( \omega \), then \( X \) is DPP and \( \Cl_\mu A = \body{ \boldsymbol{D} ( A ) } \) where \begin{equation}\label{eq:densitytree}
\boldsymbol{D} ( A ) = \setofLR{ t \in T }{ \mu ( A \cap {\boldsymbol N}\!_t ) > 0 } \end{equation} is the tree of those basic open sets in which \( A \) is non-null~\cite[][Definition 3.3]{Andretta:2013uq}. Therefore
\begin{corollary}\label{cor:densitytree3} \( \boldsymbol{D} \body{ \boldsymbol{D} ( A ) } = \boldsymbol{D} ( A ) \), i.e. \( \boldsymbol{D} ( \Cl \Phi ( A ) ) = \boldsymbol{D} ( A ) \). \end{corollary}
A metric space is \markdef{Heine-Borel} if every closed ball is compact. Any such space is \( \Ksigma \), being the increasing union of the closed balls centered at a fixed point, and it is Polish: separability follows from \( \sigma \)-compactness, and completeness holds because every Cauchy sequence is bounded, hence eventually contained in a compact ball.
\begin{theorem}\label{thm:setofcompactsinMALG} Let \( ( X , d , \mu ) \) be a Heine-Borel space such that every compact set has finite measure. Then \( \mathscr{K} ( X , \mu ) \) and \( \mathscr{F} ( X , \mu ) \) are \( \boldsymbol{\Pi}^{0}_{3} \) in \( \MALG ( X , \mu ) \). \end{theorem}
\begin{proof} Fix \( \bar{x} \in X \), and let \( B_n = \setofLR{y \in X} {d ( \bar{x}, y ) \leq n + 1 } \) be the closed ball of center \( \bar{x} \) and radius \( n + 1 \).
First we prove that \( \mathscr{K} ( X , \mu ) \) is \( \boldsymbol{\Pi}^{0}_{3} \). Note that \[ \eq{A} \in \mathscr{K} \mathbin{\, \Leftrightarrow \,} \EXISTS{n} ( A \subseteq_\mu B_n ) \mathbin{\, \wedge \,} \mu ( A ) \geq \mu ( \Cl_\mu A ) \] and the right hand side is equivalent to \[
\underbracket[0.5pt]{ \exists n ( A \subseteq_\mu B_n ) }_{\upvarphi ( A )} \wedge \forall q \in \mathbb{Q}_+ \bigl ( \underbracket[0.5pt]{\exists n ( A \subseteq_\mu B_n ) \wedge \mu ( \Cl_\mu A ) > q}_{\uppsi ( A , q ) } \mathbin{\, \Rightarrow \,} \underbracket[0.5pt]{\mu ( A ) \geq q }_{\upchi (A , q ) } \bigr ) . \] The formulæ \( \upvarphi ( A ) \) and \( \upchi (A , q ) \) are easily seen to be \( \mathsf{\Sigma ^0_2} \) and \( \mathsf{\Pi^0_1} \) respectively, so it suffices to show that \( \uppsi ( A , q ) \) is \( \mathsf{\Sigma ^0_3} \). Let \( ( U_n )_n \) be a countable basis for \( X \). \[ \begin{split} \uppsi ( A , q ) & \mathbin{\Leftrightarrow } \exists n ( A \subseteq_\mu B_n ) \wedge \exists \varepsilon \in \mathbb{Q}_+ \forall n_0 ,\dots , n_h \in \omega \\ & \qquad\qquad [ \Cl_\mu A \subseteq U_{n_0} \cup \dots \cup U_{n_h} \Rightarrow q + \varepsilon < \mu ( U_{n_0} \cup \dots \cup U_{n_h} ) ]
\\
& \mathbin{\Leftrightarrow } \exists n ( A \subseteq_\mu B_n ) \wedge \exists \varepsilon \in \mathbb{Q}_+ \forall n_0 ,\dots , n_h \in \omega
\\
& \qquad\qquad [ \exists m_0 , \dots , m_k ( B_n \setminus ( U_{n_0} \cup \dots \cup U_{n_h} ) \subseteq U_{m_0} \cup \dots \cup U_{m_k} \wedge {}
\\
& \qquad\qquad\quad \mu ( A \cap ( U_{m_0} \cup \dots \cup U_{m_k} ) ) = 0 ) \Rightarrow q + \varepsilon < \mu ( U_{n_0} \cup \dots \cup U_{n_h} ) ] . \end{split} \] The premise of the implication is \( \mathsf{\Sigma ^0_2} \), so \( \uppsi (A , q ) \) is \( \mathsf{\Sigma^0_3} \), as required.
We now prove that \( \mathscr{F} ( X , \mu ) \) is \( \boldsymbol{\Pi}^{0}_{3} \). Notice that it is enough to show that \[ \eq{A} \in \mathscr{F} \mathbin{\, \Leftrightarrow \,} \FORALL{ n \in \omega} \bigl ( \eq{A} \cap \eq{B_n } \in \mathscr{K} \bigr ) \] and use the fact that \( \MALG^2 \to \MALG \), \( ( \eq{X} , \eq{Y} ) \mapsto \eq{ X \cap Y } \), is continuous. To establish the equivalence, suppose that \( A =_{\mu } F \) for some closed \( F \). Then \( \eq{A } \cap \eq{ B_n } = \eq{ F \cap B_n } \in \mathscr{K} \). Conversely, suppose that for each \( n \) there is a compact \( C_n \) such that \( C_n =_{\mu } A \cap B_n \). Since \( \Cl_\mu \) is \( =_\mu \)-invariant and \( C =_\mu \Cl_\mu ( C ) \) for closed \( C \), the set \( C'_n = \Cl_\mu ( A \cap B_n ) = \Cl_\mu ( C_n ) \) is a compact subset of \( B_n \) with \( C'_n =_{\mu } A \cap B_n \), and \( C'_n \subseteq C'_m \) whenever \( n \leq m \). Let \( F = \bigcup_{ n \in \omega } C'_n \): then \( A =_{\mu } F \), and \( F \) is closed, since \( F \cap \Int ( B_m ) = C'_m \cap \Int ( B_m ) \) for every \( m \): indeed, if \( x \in C'_n \cap \Int ( B_m ) \) with \( n \geq m \), then for every open \( V \ni x \) we have \( \mu ( A \cap B_m \cap V ) \geq \mu ( A \cap B_n \cap ( V \cap \Int ( B_m ) ) ) > 0 \), so \( x \in C'_m \). Therefore \( \eq{A} = \eq{F} \in \mathscr{F} \), concluding the proof.
\begin{lemma} \label{lem:suptisBaire1} Let \( X \) be compact, metric. Then the function \( f \colon \MALG ( X ) \to \KK ( X ) \) defined by \( f ( \eq{A} ) = \Cl_\mu A \) is in \( \mathscr{B}_1 \). \end{lemma}
\begin{proof} Let \( ( U_n )_n \) be a basis of \( X \) and fix an open subset \( U \subseteq X \). If \( A \subseteq X \) is measurable, then \[ \Cl_\mu A \subseteq U \mathbin{\, \Leftrightarrow \,} \EXISTS {n_0 , \ldots , n_h} \bigl ( U^{ \complement } \subseteq U_{ n_0 } \cup \ldots \cup U_{n_h} \mathbin{\, \wedge \,} \mu ( A \cap ( U_{n_0} \cup \ldots \cup U_{n_h} ) ) = 0 \bigr ) \] and this condition is \( \boldsymbol{\Sigma}^0_2 \) on \( \eq{A} \). Moreover, \[ \Cl_\mu A\cap U \neq \emptyset \mathbin{\Leftrightarrow } \mu ( A \cap U ) > 0 , \] which is an open condition on \( \eq{A} \). So the preimage under \( f \) of any open subset of \( \KK ( X ) \) is \( \boldsymbol{\Sigma}^0_2 \). \end{proof}
\begin{lemma} \label{lem:measurefunctionisBaire1} Let \( X \) be a separable metrizable Radon space whose measure is outer regular. Then the function \( g \colon \KK ( X ) \to [ 0 ; + \infty ] \) defined by \( g ( K ) = \mu ( K ) \) is in \( \mathscr{B}_1 \). \end{lemma}
\begin{proof} Let \( ( U_n )_{ n < \omega } \) be a basis of \( X \). Fix \( a \geq 0 \); then, for \( K \in \KK ( X ) \), one has \begin{multline*} a < \mu ( K ) \mathbin{\, \Leftrightarrow \,} {} \\ \EXISTS{ \varepsilon > 0 } \FORALL {n_0 , \ldots , n_h}\left ( K \subseteq U_{n_0} \cup \ldots \cup U_{n_h} \Rightarrow a + \varepsilon < \mu ( U_{n_0} \cup \ldots \cup U_{n_h} ) \right ) . \end{multline*} This condition is \( \boldsymbol{\Sigma}^0_2 \) on \( K \). For \( b > 0 \), one has \[ \mu ( K ) < b \mathbin{\, \Leftrightarrow \,} \EXISTS{n_0 , \ldots , n_h }\left ( \mu ( U_{n_0} \cup \ldots \cup U_{n_h} ) < b \mathbin{\, \wedge \,} K \subseteq U_{n_0} \cup \ldots \cup U_{n_h} \right ) , \] an open condition on \( K \). So, the preimage under \( g \) of an open subset of \( [ 0 ; +\infty ] \) is \( \boldsymbol{\Sigma}^0_2 \). \end{proof}
\begin{definition}\label{def:thickset} Suppose \( \mu \) is a Borel measure on a topological space \( X \), \( U \) is open and nonempty, and \( A \) is measurable. We say that \( A \) is \begin{itemize} \item \markdef{thick in} \( U \) if \( \mu ( A \cap V ) > 0 \) for all open nonempty sets \( V \subseteq U \), \item \markdef{co-thick in} \( U \) if \( A^\complement \) is thick in \( U \). \end{itemize} If \( U =_\mu X \) we simply say that \( A \) is thick/co-thick. \end{definition} Note that \( A \) is thick in \( U \) if and only if \( \Cl_\mu ( A ) \supseteq U \). In a DPP space, \( A \) is thick in an open set \( U \) iff \( \Phi ( A ) \) is dense in \( U \).
\begin{lemma}\label{lem:thick} Let \( ( X , d , \mu ) \) be a separable Radon metric space, with \( \mu \) nonsingular. If \( 0 < \mu ( A ) < \infty \) then for all \( \varepsilon > 0 \) there is a compact set \( K \subseteq A \) with empty interior and such that \( \mu ( A ) - \varepsilon < \mu ( K ) \). \end{lemma}
\begin{proof} Fix \( A \) and \( \varepsilon \) as above. Without loss of generality we may assume that \( \varepsilon < \mu ( A ) \). Let \( F \subseteq A \) be compact and such that \( \mu ( F ) > \mu ( A ) - \varepsilon / 2 \). Let \( \setof{ q_n }{ n \in \omega } \) be dense in \( X \) and by our assumption on \( \mu \) choose \( r_n > 0 \) such that \( \mu ( \Ball ( q_n ; r_n ) ) \leq \varepsilon 2^{- ( n + 2 ) } \), so that \( U = \bigcup_{ n \in \omega } \Ball ( q_n ; r_n ) \) has measure \( \leq \varepsilon / 2 \). Then \( K = F \setminus U \subseteq A \) is compact with empty interior and \( \mu ( K ) \geq \mu ( F ) - \varepsilon / 2 > \mu ( A ) - \varepsilon \). \end{proof}
\begin{theorem}\label{thm:thick&cothick} Suppose \( ( X , d , \mu ) \) is separable, fully supported Radon metric space, with \( \mu \) nonsingular. Then there is a \( \Ksigma \) set which is thick and co-thick. \end{theorem}
\begin{proof} As \( X \) is second countable and fully supported, and \( \mu \) is locally finite, fix a base \( \setof{U_n}{ n \in \omega } \) for \( X \) such that \( 0 < \mu ( U_n ) < \infty \) for all \( n \). We inductively construct compact sets \( C_n \) for \( n \in \omega \) with empty interior such that \( \FORALL{i \leq n} ( \mu ( U_i \cap \bigcup_{j \leq n} C_j ) > 0 ) \). Let \( \tilde{n} \geq n \) be least such that \( U_{\tilde{n}} \subseteq U_n \setminus \bigcup_{j < n } C_j \); such \( \tilde{n} \) exists, since \( \bigcup_{j < n } C_j \) is closed with empty interior, so \( U_n \setminus \bigcup_{j < n } C_j \) is open and nonempty. By Lemma~\ref{lem:thick} choose \( C_n \subseteq U_{\tilde{n}} \) compact with empty interior and such that \( 0 < \mu ( C_n ) \leq 2^{-n - 2 } \min \setof{\mu ( U_{\tilde{m} } ) }{ m \leq n } \).
Clearly \( F = \bigcup_{n} C_n \) is \( \Ksigma \) and thick. In order to prove it is co-thick, it is enough to show that \( \mu ( U_n \setminus F ) > 0 \) for each \( n \). Fix \( n \in \omega \): as \( U_{\tilde{n}} \subseteq U_n \), it is enough to show that \( \mu ( U_{\tilde{n}} \cap F ) < \mu ( U_{\tilde{n}} ) \). By construction if \( C_m \cap U_{\tilde{n}} \neq \emptyset \), then \( m \geq n \), and hence \( \mu ( C_m ) \leq 2^{- m - 2 } \mu ( U_{\tilde{n}} ) \) and therefore \( \mu ( F \cap U_{\tilde{n}} ) \leq \mu ( U_{\tilde{n}} ) / 2 \). \end{proof}
Theorem~\ref{thm:thick&cothick} emphasizes a difference between measure and category, since in a topological space any nonmeager subset with the Baire property is comeager in some open set.
Working in \( \pre{\omega}{2} \), the function \[
\hat{ \Phi } \colon \MALG \to \boldsymbol{\Pi}^0_3, \quad \hat{ \Phi } ( \eq{A} ) = \Phi ( A ) , \] is Borel-in-the-codes~\cite[][Proposition 3.1]{Andretta:2013uq}, while \( \hat{ \mu } \colon \MALG \to [ 0 ; 1 ] \), \( \hat{ \mu } \eq{A} = \mu ( A ) \), is continuous. The \( \Ksigma \) set \( F \) constructed in Theorem~\ref{thm:thick&cothick} can be of arbitrarily small measure, and hence \( A \cup F \) can be arbitrarily close to any measurable set \( A \). Therefore the map \( \MALG \to \PrTr_2 \), \( \eq{A} \mapsto \boldsymbol{D} ( A ) \), where \( \PrTr_2 \) is the Polish space of all pruned trees on \( \set{0 , 1} \), is not continuous, but it is in \( \mathscr{B}_1 \). To see this apply Lemma~\ref{lem:suptisBaire1} together with the fact that \( \body{\boldsymbol{D} ( A ) } = \Cl_\mu A \) and that the map \( \KK ( \pre{\omega}{2} ) \to \PrTr_2 \), \( K \mapsto T_K \), is continuous. If \( A \) is dualistic, then \( \Phi ( A ) \) and \( \Phi ( A^\complement ) \) are \( \boldsymbol{\Delta}^{0}_{2} \) by Proposition~\ref{prop:solidsetGdelta}. In~\cite[][Section 3.4]{Andretta:2013uq} examples of dualistic, solid, spongy sets are constructed.
For any Polish measure space \( ( X , d , \mu ) \) the set \( \mathscr{ K } ( X ) \) is dense by tightness of \( \mu \), and it is meager by~\cite[Theorem 1.6]{Andretta:2013uq}. (The proof in that paper is stated for \( \pre{\omega }{2} \), but it works in any Polish measure space.)
In a DPP space, if \( C \) is closed and thick in some nonempty open set \( U \), then \( \Phi ( C ) \) is dense in \( U \); since \( \Phi ( C ) \subseteq C \) and \( C \) is closed, it follows that \( C \supseteq U \). Hence:
\begin{lemma}\label{lem:thickcothicknotcompact} In a DPP space \( ( X , d , \mu ) \), if \( A \) is thick and co-thick in some nonempty open set \( U \), then \( \eq{A} \notin \mathscr{ F } ( X , \mu ) \). \end{lemma}
\begin{theorem}\label{thm:KisPi03completeCantor}
\( \mathscr{K} ( \pre{\omega}{2} , \mu^{\mathrm{C}} ) \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete in \( \MALG \). \end{theorem}
\begin{proof} By Theorem~\ref{thm:setofcompactsinMALG} \( \mathscr{K} \) is \( \boldsymbol{\Pi}^{0}_{3} \), so it is enough to prove \( \boldsymbol{\Pi}^{0}_{3} \)-hardness. We define a continuous \( \hat{f} \colon \pre{ \omega \times \omega }{ 2 } \to \MALG \) witnessing \( \boldsymbol{P}_3 \leq_{\mathrm{W}} \mathscr{K} \), where \[ \boldsymbol{P}_3 = \setof{ z \in \pre{ \omega \times \omega }{ 2 } }{ \FORALL{n} \EXISTS{m} \FORALL{k \geq m} z ( n , k ) = 0 } \] is \( \boldsymbol{\Pi}^0_3 \)-complete~\cite[p.~179]{Kechris:1995kc}. More precisely, set \( \hat{f} ( z ) = \eq{ f ( z ) } \) where \[ f ( z ) = \bigcup_{n} \varphi ( z \mathpunct{\upharpoonright} n \times n ) \] for some suitable function \( \varphi \colon \pre{ < \omega \times \omega }{2} \to \KK ( \pre{\omega }{2} ) \) such that for all \( a \in \pre{ < \omega \times \omega }{2} \) \begin{subequations} \begin{gather}
\Int \varphi ( a ) = \emptyset , \label{eq:thm:KisPi03complete-1} \\ b \subseteq a \mathbin{\, \Rightarrow \,} \varphi ( b ) \subseteq \varphi ( a ) , \label{eq:thm:KisPi03complete-2} \\ a \in \Pre{ ( n + 1 ) \times ( n + 1 ) }{2}\mathbin{\, \Rightarrow \,} \mu^{\mathrm{C}} \left ( \varphi ( a ) \setminus \varphi ( a \mathpunct{\upharpoonright} n \times n ) \right ) \leq 2^{ - ( n + 2 )} . \label{eq:thm:KisPi03complete-3} \end{gather} \end{subequations} For \( a , b \in \pre{ < \omega \times \omega }{2} \) let \( \delta ( a , b ) \) be the largest \( n \) such that \( a \mathpunct{\upharpoonright} n \times n = b \mathpunct{\upharpoonright} n \times n \). Equation~\eqref{eq:thm:KisPi03complete-3} implies that if \( a \in \pre{ n \times n }{2} \) then \( a \subset a' \Rightarrow \mu^{\mathrm{C}} \left ( \varphi ( a' ) \setminus \varphi ( a ) \right ) < 2^{ - ( n + 1 ) } \); thus if \( a , b \in \pre{ < \omega \times \omega }{2} \) are such that \( \delta ( a , b ) = n \), then \( \varphi ( a ) \mathop{\triangle} \varphi ( b ) \subseteq ( \varphi ( a ) \setminus \varphi ( a \mathpunct{\upharpoonright} n \times n ) ) \cup ( \varphi ( b ) \setminus \varphi ( b \mathpunct{\upharpoonright} n \times n ) ) \) and hence \( \mu^{\mathrm{C}} \left ( \varphi ( a ) \mathop{\triangle} \varphi ( b ) \right ) < 2^{ - n } \). Therefore if \( z , w \in \pre{ \omega \times \omega }{2} \) and \( n \) is largest such that \( z \mathpunct{\upharpoonright} n \times n = w \mathpunct{\upharpoonright} n \times n \), then \( \mu^{\mathrm{C}} \left ( f ( z ) \mathop{\triangle} f ( w ) \right ) \leq 2^{ - n } \), and therefore \( \hat{f} \) is continuous. We arrange that \begin{subequations} \begin{align} z \in \boldsymbol{P}_3 & \mathbin{\, \Rightarrow \,} f ( z ) \in \KK ( \pre{\omega }{2} ) \label{eq:thm:KisPi03complete-5} \\ z \notin \boldsymbol{P}_3 & \mathbin{\, \Rightarrow \,} f ( z ) \in \Ksigma ( \pre{\omega }{2} ) \text{ is thick and co-thick in some } {\boldsymbol N}\!_{0^{( j )} {}^\smallfrown 1 } . \label{eq:thm:KisPi03complete-6} \end{align} \end{subequations} By Lemma~\ref{lem:thickcothicknotcompact}, equation~\eqref{eq:thm:KisPi03complete-6} guarantees that if \( z \notin \boldsymbol{P}_3 \) then \( \hat{f} ( z ) \notin \mathscr{K} \), and therefore \( \hat{f} \) witnesses that \( \boldsymbol{P}_3 \leq_{\mathrm{W}} \mathscr{K} \).
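For the reader's convenience, here is the computation behind the continuity estimate above (it is just~\eqref{eq:thm:KisPi03complete-3} summed over the intermediate stages, using~\eqref{eq:thm:KisPi03complete-2}): if \( a \in \pre{ n \times n }{2} \), \( a \subset a' \), and \( a' \in \Pre{ N \times N }{2} \), then \[ \mu^{\mathrm{C}} \left ( \varphi ( a' ) \setminus \varphi ( a ) \right ) \leq \sum_{ m = n }^{ N - 1 } \mu^{\mathrm{C}} \left ( \varphi ( a' \mathpunct{\upharpoonright} ( m + 1 ) \times ( m + 1 ) ) \setminus \varphi ( a' \mathpunct{\upharpoonright} m \times m ) \right ) \leq \sum_{ m = n }^{ N - 1 } 2^{ - ( m + 2 ) } < 2^{ - ( n + 1 ) } . \]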
Here are the details. Fix \( ( s^j_m )_m \) an enumeration without repetitions of the nodes extending \( 0^{( j )} {}^\smallfrown 1 \), and such that longer nodes are enumerated after shorter ones, that is: \( \lh ( s^j_n ) < \lh ( s^j_m ) \Rightarrow n < m \). \begin{itemize}[leftmargin=1pc] \item Set \( \varphi ( \emptyset ) = \set{ 0^{ ( \omega ) } } \). Then~\eqref{eq:thm:KisPi03complete-1} holds, and~\eqref{eq:thm:KisPi03complete-2} and~\eqref{eq:thm:KisPi03complete-3} do not apply. \item Suppose \( a \in \Pre{ ( n + 1 ) \times ( n + 1 ) }{2} \) and that \( \varphi ( a \mathpunct{\upharpoonright} n \times n ) \) satisfies~\eqref{eq:thm:KisPi03complete-1}--\eqref{eq:thm:KisPi03complete-3}, and let us construct \( \varphi ( a ) \). If \( a ( j , n ) = 0 \) for all \( j \leq n \), then set \( \varphi ( a ) = \varphi ( a \mathpunct{\upharpoonright} n \times n ) \), so that~\eqref{eq:thm:KisPi03complete-1}--\eqref{eq:thm:KisPi03complete-3} are still true. Otherwise, let \( j \leq n \) be least such that \( a ( j , n ) = 1 \). Then by~\eqref{eq:thm:KisPi03complete-1} for \( \varphi ( a \mathpunct{\upharpoonright} n \times n ) \), we can define \( k \) to be the least such that \( \mu^{\mathrm{C}} \bigl ({\boldsymbol N}\!_{ s^j_k } \cap \varphi ( a \mathpunct{\upharpoonright} n \times n ) \bigr ) = 0 \), and let \( K \subseteq {\boldsymbol N}\!_{ s^j_k } \) be compact with empty interior and such that \begin{equation}\label{eq:thm:KisPi03complete-7} 0 < \mu^{\mathrm{C}} ( K ) \leq 2^{ - ( n + 2 ) } \mu^{\mathrm{C}} ( {\boldsymbol N}\!_{ s^j_k } ) . \end{equation} Then \( \varphi ( a ) = \varphi ( a \mathpunct{\upharpoonright} n \times n ) \cup K \) satisfies~\eqref{eq:thm:KisPi03complete-1}--\eqref{eq:thm:KisPi03complete-3}. \end{itemize}
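Let us spell out why this choice of \( K \) yields~\eqref{eq:thm:KisPi03complete-3}: since \( \varphi ( a ) \setminus \varphi ( a \mathpunct{\upharpoonright} n \times n ) \subseteq K \) and \( \mu^{\mathrm{C}} ( {\boldsymbol N}\!_{ s^j_k } ) \leq 1 \), equation~\eqref{eq:thm:KisPi03complete-7} gives \[ \mu^{\mathrm{C}} \left ( \varphi ( a ) \setminus \varphi ( a \mathpunct{\upharpoonright} n \times n ) \right ) \leq \mu^{\mathrm{C}} ( K ) \leq 2^{ - ( n + 2 ) } \mu^{\mathrm{C}} ( {\boldsymbol N}\!_{ s^j_k } ) \leq 2^{ - ( n + 2 ) } , \] while~\eqref{eq:thm:KisPi03complete-1} and~\eqref{eq:thm:KisPi03complete-2} hold because \( K \) is closed with empty interior and \( \varphi \) only grows along extensions.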
The proof is complete once we check that~\eqref{eq:thm:KisPi03complete-5} and~\eqref{eq:thm:KisPi03complete-6} hold. Suppose first \( z \in \boldsymbol{P}_3 \). Then for each \( j \) there is \( N_j \in \omega \) such that \( z ( j , n ) = 1 \Rightarrow n < N_j \), and hence \( {\boldsymbol N}\!_{ 0^{( j )} {}^\smallfrown 1} \cap f ( z ) = {\boldsymbol N}\!_{ 0^{( j )} {}^\smallfrown 1} \cap \varphi ( z \mathpunct{\upharpoonright} N_j \times N_j ) \) is compact for every \( j \); since these pieces accumulate only at \( 0^{ ( \omega ) } \in \varphi ( \emptyset ) \subseteq f ( z ) \), the set \( f ( z ) \) is compact. Suppose now \( z \notin \boldsymbol{P}_3 \), and let \( j \) be least such that \( \setof{n}{ z ( j , n ) = 1 } \) is infinite. Then \( f ( z ) \) is thick in \( {\boldsymbol N}\!_{ 0^{( j )} {}^\smallfrown 1 } \): fix \( k \in \omega \); then for \( N \) such that \( \setofLR{ M < N }{ z ( j , M ) = 1 } \) has size at least \( k + 1 \), one has that \( \mu^{\mathrm{C}} ( \varphi ( z \mathpunct{\upharpoonright} N \times N ) \cap {\boldsymbol N}\!_{ s^j_k } ) > 0 \). Moreover \( f ( z ) \) is co-thick in \( {\boldsymbol N}\!_{ 0^{( j )} {}^\smallfrown 1 } \). To see this fix \( k \in \omega \) and let \( N \) be such that \( \setofLR{ M < N }{ z ( j , M ) = 1 } \) has size \( k \), and let \( H = \varphi ( z \mathpunct{\upharpoonright} N \times N ) \cap {\boldsymbol N}\!_{ s^j_k } \). Since \( H \) is closed with empty interior, let \( k' \geq k \) be least with \( s^j_k \subseteq s^j_{k'} \) and \( H \cap {\boldsymbol N}\!_{ s^j_{k'} } = \emptyset \). Then \( \mu^{\mathrm{C}} ( f ( z ) \cap {\boldsymbol N}\!_{ s^j_{k'} } ) < \mu^{\mathrm{C}} ( {\boldsymbol N}\!_{ s^j_{k'} } ) \) by~\eqref{eq:thm:KisPi03complete-7}. \end{proof}
\begin{corollary}\label{cor:KisPi03completeCantor} Let \( ( X , d , \mu ) \) be a Polish measure space such that \( \mu \) is nonsingular. If there is a \( Y \subseteq X \) such that \( 0 < \mu ( Y ) < \infty \), then \( \mathscr{K} ( X , \mu ) \) and \( \mathscr{F} ( X , \mu ) \) are \( \boldsymbol{\Pi}^{0}_{3} \)-hard. \end{corollary}
\begin{proof} We may assume that \( Y \) is \( \Gdelta \). Choose \( r > 0 \) small enough so that Theorem~\ref{thm:embeddingCantorinPolish} can be applied, so that there is an injective continuous \( H \colon \pre{\omega }{2} \to Y \) such that \( r \mu^{\mathrm{C}} ( A ) = \mu ( H [ A ] ) \) for all measurable \( A \subseteq \pre{\omega }{2} \). The map \( H \) induces an embedding between the measure algebras \[ \hat{H} \colon \MALG ( \pre{\omega}{2} , { r \mu^{\mathrm{C}}} ) \to \MALG ( K , \mu ) , \quad \eq{A} \mapsto \eq{ H [ A ] } , \] where \( K = \ran H \). There is a natural embedding \( j \colon \MALG ( K , \mu ) \hookrightarrow \MALG ( X , \mu ) \), sending each \( \eq{A}_K \equalsdef \setof{ B \in \MEAS_\mu \cap \mathscr{P} ( K )}{ B =_\mu A } \) to \( \eq{A}_X \equalsdef \setof{ B \in \MEAS_\mu }{ B =_\mu A } \). Then \( j \circ \hat{H} \) is a reduction witnessing both \( \mathscr{K} ( \pre{\omega }{2} , r \mu^{\mathrm{C}} ) \leq_{\mathrm{W}} \mathscr{ K } ( X , \mu ) \) and \( \mathscr{K} ( \pre{\omega }{2} , r \mu^{\mathrm{C}} ) \leq_{\mathrm{W}} \mathscr{ F } ( X , \mu ) \). For the second reduction, argue as follows: if \( H [ A ] =_\mu F \) for some closed \( F \subseteq X \), then \( H [ A ] =_\mu F \cap K \), and since \( F \cap K \) is compact and \( H \) is a homeomorphism onto \( K \), it follows that \( \eq{A} \in \mathscr{ K } ( \pre{\omega}{2} , { r \mu^{\mathrm{C}}} ) \). \end{proof}
By Proposition~\ref{thm:setofcompactsinMALG} and Corollary~\ref{cor:KisPi03completeCantor},
\begin{theorem}\label{thm:KisPi03complete} Let \( ( X , d , \mu ) \) be a Heine-Borel space such that every compact set has finite measure, and suppose \( \mu \) is nonsingular. Then \( \mathscr{K} ( X , \mu ) \) and \( \mathscr{F} ( X , \mu ) \) are \( \boldsymbol{\Pi}^{0}_{3} \)-complete. \end{theorem}
\section{The set of exceptional points}\label{sec:exceptionalpoints} \begin{theorem}\label{thm:blurrypointsSigma03} Suppose \( \emptyset \neq A \subseteq \pre{\omega}{2} \) has empty interior, and \( A = \Phi ( A ) \). Then \( \Blur ( A ) \) is \( \boldsymbol{\Sigma}^{0}_{3} \)-complete. \end{theorem}
\begin{proof} For any \( z \in \pre{ \omega \times \omega }{2} \), let \( z' \in \pre{ \omega \times \omega }{2} \) be defined by the conditions \[ \begin{cases}
z' ( 2i , 2j ) = z' ( 2i + 1 , 2j + 1 ) = z ( i , j )
\\
z' ( 2i , 2j + 1 ) = z' ( 2i + 1 , 2j ) = 0 \end{cases} \] for all \( i , j \in \omega \). The function \( \pre{ \omega \times \omega }{2} \to \pre{ \omega \times \omega }{2} \), \( z \mapsto z' \), is continuous.
Recall the tree \( \boldsymbol{D} ( A ) \) defined in~\eqref{eq:densitytree}. Given \( a \in \Pre{ n \times n }{2} \), a node \( \psi ( a ) \in \boldsymbol{D} ( A ) \) is constructed with the property that \[ a \subset b \Rightarrow \psi ( a ) \subset \psi ( b ) \] so that defining \[ f \colon \pre{ \omega \times \omega }{2} \to \body{ \boldsymbol{D} ( A ) } , \quad f ( z ) = \bigcup_{ n \in \omega }\psi ( z' \mathpunct{\upharpoonright} n \times n ) , \] the function \( f \) is continuous and will witness \( \boldsymbol{P}_3^\complement \leq_{\mathrm{W}} \Blur ( A ) \). Define \( I_n \), \( \rho \) as in the proof of~\cite[section 7.1]{Andretta:2013uq}, that is \( I_n = \cointerval{ 1 - 2^{ - n} }{ 1 - 2^{ - n - 1} } \) and \( \rho ( s ) = n \mathbin{\Leftrightarrow } \mu^{\mathrm{C}} ( \LOC{A}{ s} ) \in I_n \).
Let \( \psi ( \emptyset ) = \emptyset \). Given \( a \in \Pre{ ( n + 1 ) \times ( n + 1 ) }{2} \) define \( \psi ( a ) = t \) as follows: \begin{itemize} \item If \( \FORALL{ j \leq n} [ a ( j , n ) = 0 ] \), by \cite[Proposition 3.5]{Andretta:2013uq} let \( t \in \boldsymbol{D} ( A ) \) be a proper extension of \( \psi ( a \mathpunct{\upharpoonright} n \times n ) \) such that \( \rho ( t ) \geq n + 1 \) and \[ \FORALL{u} \left [ \psi ( a \mathpunct{\upharpoonright} n \times n ) \subseteq u \subseteq t \mathbin{\, \Rightarrow \,} \rho ( u ) \geq \rho \left ( \psi ( a \mathpunct{\upharpoonright} n \times n ) \right ) \right ] . \] \item If \( \EXISTS{ j \leq n} [ a ( j , n ) = 1 ] \), let \( j_0 \) be the least such \( j \). By \cite[Proposition 3.5 and Claim 7.0.1]{Andretta:2013uq}, let \( t \in \boldsymbol{D} ( A ) \) be a proper extension of \( \psi ( a \mathpunct{\upharpoonright} n \times n ) \) with \( \rho ( t ) = 2 j_0 \) and \[ \FORALL{u} \left [ \psi ( a \mathpunct{\upharpoonright} n \times n ) \subseteq u \subseteq t \mathbin{\, \Rightarrow \,} \rho ( u ) \geq \min \setLR{ \rho \left ( \psi ( a \mathpunct{\upharpoonright} n \times n ) \right ) , 2 j_0 } \right ] . \] \end{itemize} Suppose \( z \in \boldsymbol{P}_3 \), so that \( z' \in \boldsymbol{P}_3 \) as well. For every \( k \in \omega \) choose \( m_k \in \omega \) such that \( \FORALL{ m \geq m_k} [ z' ( k , m ) = 0 ] \) and let \( M_k = \max \setLR{ m_0 , \ldots , m_k } \). Therefore for every \( n \geq \max \setLR{ k , M_k } \), the least \( j \leq n \) such that \( z' ( j , n ) = 1 \)---if such a \( j \) exists---is larger than \( k \) and thus \( \rho \left (\psi ( z' \mathpunct{\upharpoonright} n \times n ) \right ) > k \). This shows that \( \lim_{i \to \infty }\rho ( f ( z ) \mathpunct{\upharpoonright} i ) = + \infty \) hence \( f ( z ) \in \Phi ( A ) \).
Conversely, suppose \( z \notin \boldsymbol{P}_3 \). Let \( n_0 \) be the least \( n \) such that \( \EXISTSS{ \infty }{ m } z ( n , m ) = 1 \). This means that \( 2n_0 \) is the least \( n \) such that \( \EXISTSS{ \infty }{ m} [ z' ( n , m ) = 1 ] \); moreover, whenever \( z' ( 2n_0 , m ) = 1 \), then \( z' ( 2 n_0 , m + 1 ) = 0 \) and \( z' ( 2n_0 + 1 , m + 1 ) = 1 \). Then there are arbitrarily large values of \( n \) such that \[ \rho \left ( \psi ( z' \mathpunct{\upharpoonright} n \times n ) \right ) = 4 n_0 , \quad \rho \left ( \psi ( z' \mathpunct{\upharpoonright} ( n + 1 ) \times ( n + 1 ) ) \right ) = 4 n_0 + 2 \] hence \( \rho ( f ( z ) \mathpunct{\upharpoonright} i ) = 4 n_0 \) for infinitely many values of \( i \) and \( \rho ( f ( z ) \mathpunct{\upharpoonright} i ) = 4 n_0 + 2 \) for infinitely many values of \( i \). Since \( \rho ( f ( z ) \mathpunct{\upharpoonright} i ) = m \) means that \( \mu^{\mathrm{C}} ( \LOC{A}{ f ( z ) \mathpunct{\upharpoonright} i } ) \in I_m \), and the intervals \( I_{ 4 n_0 } \) and \( I_{ 4 n_0 + 2 } \) are at distance \( 2^{ - 4 n_0 - 2 } \) from each other, the relative measure of \( A \) along \( f ( z ) \) oscillates by at least \( 2^{ - 4 n_0 - 2 } \), hence \( f ( z ) \in \Blur (A) \). \end{proof}
In~\cite[Theorems 1.3 and 1.7]{Andretta:2013uq} it is shown that in the Cantor space the set of \( \eq{A} \in \MALG \) such that \( A = \Phi ( A ) \) and \( \Int ( A ) = \emptyset \) is comeager in \( \MALG \).
\begin{corollary}\label{cor:blurrycomeager} \( \setof{\eq{A} \in \MALG ( \pre{\omega}{2} ) }{ \Blur ( A ) \text{ is \( \boldsymbol{\Sigma}^{0}_{3} \)-complete}} \) and \( \setof{\eq{A} \in \MALG ( \pre{\omega}{2} ) }{ \Exc ( A ) \text{ is \( \boldsymbol{\Sigma}^{0}_{3} \)-complete}} \) are both comeager in \( \MALG \). \end{corollary}
\begin{theorem}\label{thm:sharppointsPi03} There is a \( K \in \KK ( \pre{\omega }{2} ) \) such that \( \Phi ( K ) \) is open, and \( \Sharp ( K ) \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete. Moreover for any given \( r \in ( 0 ; 1 ) \) we can arrange that \( \setof{ x \in \pre{\omega}{2} }{ \mathscr{D}_K ( x ) = r } \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete. \end{theorem}
\begin{proof} We will construct a compact set \( K \subseteq \pre{\omega }{2} \) together with a continuous injective \( f \colon \pre{\omega \times \omega }{2} \to \pre{\omega}{2} \) such that \( \ran f \subseteq \Exc ( K ) \) and \( f \) witnesses that \( \boldsymbol{P}_3 \leq_{\mathrm{W}} \Sharp ( K ) \). The construction is arranged so that \begin{subequations} \begin{align} z \in \boldsymbol{P}_3 & \mathbin{\, \Rightarrow \,} \mathscr{D}_K ( f ( z ) ) = r , \label{eq:th:sharppointsPi03converges} \\ z \notin \boldsymbol{P}_3 & \mathbin{\, \Rightarrow \,} \mathscr{O}_K ( f ( z ) ) > 0 , \label{eq:th:sharppointsPi03oscillates} \end{align} \end{subequations} where \( r \in ( 0 ; 1 ) \) is some fixed value that can be chosen in advance.
We will define a collection \( \tilde{ \mathcal{G} } \subseteq \pre{ < \omega }{2} \) whose elements are called \markdef{good nodes} such that its closure under initial segments \begin{equation}\label{eq:th:sharppointsPi03defT} T = \setof{ t \in \pre{ < \omega }{2} }{ \exists s \in \tilde{ \mathcal{G} } ( t \subseteq s )} \end{equation} is a pruned tree. The set \begin{equation}\label{eq:th:sharppointsPi03defK} K = \body{T} \cup \bigcup_{ s \in \tilde{ \mathcal{G} } } s {}^\smallfrown U_s , \end{equation} where the \( U_s \) are clopen, is compact. We will arrange the construction so that \begin{subequations} \begin{gather}
\mu^{\mathrm{C}} ( \body{T} ) = 0 , \label{eq:th:sharppointsPi03-a} \\ \forall s \in \tilde{ \mathcal{G} } \left ( \body{T} \cap ( s {}^\smallfrown U_s ) = \emptyset \right ) , \label{eq:th:sharppointsPi03-b} \\ \ran f \subseteq \body{T} = \Exc ( K ) . \label{eq:th:sharppointsPi03-c} \end{gather} \end{subequations} Therefore \( \Phi ( K ) = \bigcup_{ s \in \tilde{ \mathcal{G} } } s {}^\smallfrown U_s \) is open.
We define the function \( \rho \colon T \to \omega + 1 \) by \begin{equation}\label{eq:th:sharppointsPi03rho} \rho ( t ) = n \mathbin{\, \Leftrightarrow \,} 2^{ - n - 2} \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{t} ) - r } < 2^{ - n - 1 } , \end{equation} where \( \rho ( t ) = \omega \) just in case \( \mu^{\mathrm{C}} ( \LOC{K}{t} ) = r \). The construction will ensure that \( \rho ( \emptyset ) = 0 \), that is \begin{equation}\label{eq:th:sharppointsPi03measureK}
1 / 4 \leq \card{ \mu^{\mathrm{C}} ( K ) - r } < 1 / 2. \end{equation} We require that any good node \( t \) can be gently extended to a good node \( s \) having any prescribed value of the \( \rho \) function, that is to say: for every \( t \in \tilde{ \mathcal{G} } \) \begin{subequations} \begin{align} m \geq \rho ( t ) & \mathbin{\, \Rightarrow \,} \EXISTS{s \in \tilde{ \mathcal{G} } } \left ( s \supset t \wedge \rho ( s ) = m \wedge \forall u \left ( t \subseteq u \subset s \Rightarrow \rho ( u ) \geq \rho ( t ) \right ) \right ) \label{eq:goingup} \\ m < \rho ( t ) & \mathbin{\, \Rightarrow \,} \EXISTS{s \in \tilde{ \mathcal{G} } } \left ( s \supset t \wedge \rho ( s ) = m \wedge \forall u \left ( t \subseteq u \subset s \Rightarrow \rho ( u ) \geq m \right ) \right ) . \label{eq:goingdown} \end{align} \end{subequations} Assuming all this can be done, we can define the reduction.
\paragraph{\bfseries The construction of \( f \).} For \( a \in \Pre{ n \times n }{2} \) let \( \gamma ( a ) \) be the first row (if it exists) where a \( 1 \) appears in column \( n - 1 \): \[
\gamma ( a ) = \begin{cases}
\text{the least \( j \) such that } a ( j , n - 1 ) = 1 & \text{if } \EXISTS{j < n} \left ( a ( j , n - 1 ) = 1 \right ) ,
\\
n & \text{otherwise.}
\end{cases} \] The function \( f \) is induced by a Lipschitz \( \varphi \colon \pre{ < \omega \times \omega }{2} \to T \); in fact \( \varphi \) will take values in \( \tilde{ \mathcal{G} } \) and will satisfy that \[ \rho ( \varphi ( a ) ) = \gamma ( a ) . \] Here is the definition of \( \varphi \). \begin{itemize}[leftmargin=1pc] \item Set \( \varphi ( \emptyset ) = \emptyset \). Then \( \rho ( \varphi ( \emptyset ) ) = \rho ( \emptyset ) = 0 = \gamma ( \emptyset ) \) by~\eqref{eq:th:sharppointsPi03measureK}. \item Let us define \( \varphi ( a ) \) for \( a \in \Pre{ ( n + 1 ) \times ( n + 1 ) }{2} \), assuming \( \varphi ( a \mathpunct{\upharpoonright} n \times n ) \) has been defined. By~\eqref{eq:goingup} choose a good node \( t \supseteq \varphi ( a \mathpunct{\upharpoonright} n \times n ) \) such that \( \rho ( t ) = n + 1 \) and such that \( \varphi ( a \mathpunct{\upharpoonright} n \times n ) \subseteq u \subset t \Rightarrow \rho ( u ) \geq \gamma ( a \mathpunct{\upharpoonright} n \times n ) = \rho ( \varphi ( a \mathpunct{\upharpoonright} n \times n ) ) \). \begin{description} \item[Case 1] \( \gamma ( a ) = n + 1 \). Then set \( \varphi ( a ) = t \). \item[Case 2] \( \gamma ( a ) \leq n \). Apply~\eqref{eq:goingdown} to get a good node \( s \supset t \) such that \( \rho ( s ) = \gamma ( a ) \) and \( t \subseteq u \subset s \Rightarrow \rho ( u ) \geq \gamma ( a ) \) and set \( \varphi ( a ) = s \). \end{description} \end{itemize} Let us check that the function \( f = f_ \varphi \) is indeed the required reduction.
Suppose \( z \in \boldsymbol{P}_3 \): for all \( j \) there is \( N_j \) such that if \( n \geq N_j \) then \( \forall j' \leq j \left ( z ( j' , n ) = 0 \right ) \), and therefore \( \gamma ( z \mathpunct{\upharpoonright} n \times n ) = \rho ( \varphi ( z \mathpunct{\upharpoonright} n \times n ) ) > j \). Since \[
\forall j \exists N \FORALL{n \geq N} \left ( \rho ( \varphi ( z \mathpunct{\upharpoonright} n \times n ) ) > j \right ) \mathbin{\, \Rightarrow \,} \mathscr{D}_K ( f ( z ) ) = r , \] then \( \mathscr{D}_K ( f ( z ) ) = r \) and \( f ( z ) \in \Sharp ( K ) \). Thus~\eqref{eq:th:sharppointsPi03converges} holds.
Suppose \( z \notin \boldsymbol{P}_3 \): let \( j \) be least such that \( I = \setof{ n \in \omega }{ z ( j , n ) = 1 } \) is infinite. Choose \( N > j \) such that for all \( n \geq N \) if \( j' < j \) then \( z ( j' , n ) = 0 \). Fix \( n' > n > N \) such that \( n - 1 \) and \( n' - 1 \) are consecutive elements of \( I \). Then for \( m \in \setLR{ n , n' } \) \[ 2^{ - j - 2 } \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{ \varphi ( z \mathpunct{\upharpoonright} m \times m ) } ) - r } < 2^{ - j - 1 } \] while by definition of \( \varphi \) there is \( t \) such that \( \rho ( t ) = n \) and \( \varphi ( z \mathpunct{\upharpoonright} n \times n ) \subset t \subset \varphi ( z \mathpunct{\upharpoonright} n' \times n' ) \). Therefore, as \( n > N > j \) \[ 2^{ - n - 2 } \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{t} ) - r } < 2^{ - n - 1 } < 2^{ - j - 2 } \] hence \( \mathscr{O} _K ( f ( z ) ) > 0 \) and \( f ( z ) \in \Blur ( K ) \). Thus~\eqref{eq:th:sharppointsPi03oscillates} holds.
Therefore it is enough to construct \( \tilde{\mathcal{G}} \), and hence \( T \) and \( K \), so that~\eqref{eq:th:sharppointsPi03-a}--\eqref{eq:th:sharppointsPi03-c}, \eqref{eq:th:sharppointsPi03measureK}, and \eqref{eq:goingup}--\eqref{eq:goingdown} are satisfied.
\paragraph{\bfseries The construction of \( \tilde{\mathcal{G}} \), \( T \), and \( K \).} Choose \( r_n \in \mathbb{ D } \) such that \begin{equation}\label{eq:th:sharppointsPi03r_n} 2^{ - n - 2 } + 2^{ - n - 4 } \leq \card{ r_n - r } < 2^{ - n - 1 } - 2^{ - n - 4 } . \end{equation} Let \( D_n \) be clopen such that \( \mu^{\mathrm{C}} ( D_ n ) = r_n \), let \( u_n = 0^{ ( n + 6 ) } \) and \( v_n = 1^{ ( n + 6 ) } \), and \[ E_n = \bigcup_{ 0 < i \leq n + 5 } \left ( 0^{ ( i ) } {}^\smallfrown 1 {}^\smallfrown D_n \cup 1^{ ( i ) } {}^\smallfrown 0 {}^\smallfrown D_n \right ) \] Thus \( u_0 \), \( v_0 \), and \( E_0 \) can be visualized as follows (the grey area is \( D_0 \)): \[ \begin{tikzpicture}[scale=0.5] \filldraw (2,-6) circle (2pt) -- (3,-5) circle (2pt) --(4, -4) circle (2pt) --(5, -3) circle (2pt) -- (6 , -2) circle (2pt) -- (7 , -1) circle (2pt) -- (8,0) circle (2pt) -- (9 , -1) circle (2pt) -- (10 , -2) circle (2pt) -- (11 , -3) circle (2pt) -- (12 , -4) circle (2pt) -- (13 , -5) circle (2pt) -- (14 , -6) circle (2pt) ; \node at (7 , -1) [label=165:\( 0 \)]{}; \node at (6 , -2) [label=165:\( 00 \)]{}; \node at (5 , -3) [label=165:\( 000 \)]{}; \node at (4 , -4) [label=165:\( 0000 \)]{}; \node at (3 , -5) [label=165:\( 00000 \)]{}; \node at (2 , -6) [label=180:\( u_0 \)]{}; \node at (9, -1) [label=15:\( 1 \)]{}; \node at (10 , -2) [label=15:\( 11 \)]{}; \node at (11 , -3) [label=15:\( 111 \)]{}; \node at (12 , -4) [label=15:\( 1111 \)]{}; \node at (13, -5) [label=15:\( 11111 \)]{}; \node at (14, -6) [label=0:\( v_0 \)]{}; \fill [top color=gray, bottom color=gray!60] (3,-6)--(2.5, -7)--(3.5, -7)--cycle; \draw (3 , -5)-- (3,-6)--(2.5, -7); \draw (3,-6)--(3.5, -7); \fill [top color=gray, bottom color=gray!60] (4,-5)--(3.5, -6)--(4.5, -6)--cycle; \draw (4 , -4)-- (4,-5)--(3.5, -6); \draw (4,-5)--(4.5, -6); \fill [top color=gray, bottom color=gray!60] (5,-4)--(4.5, -5)--(5.5, -5)--cycle; \draw (5 , -3)-- (5,-4)--(4.5, -5); \draw (5,-4)--(5.5, -5); \fill [top color=gray, bottom color=gray!60] (6,-3)--(5.5, -4)--(6.5, -4)--cycle; \draw (6 , -2)-- (6,-3)--(5.5, -4); \draw (6,-3)--(6.5, -4); \fill [top color=gray, bottom color=gray!60] (7.2, -2)--(7.7 , -3)--(6.7 , -3)--cycle; \draw (7 , -1)--(7.2, -2)--(7.7 , -3); \draw (7.2, -2)--(6.7 , -3); \fill [top color=gray, bottom color=gray!60] (8.8, -2)--(8.3 , -3)--(9.3 , -3)--cycle; \draw (9 , -1)--(8.8, -2)--(9.3 , -3); \draw (8.8, -2)--(8.3 , -3); \fill [top color=gray, bottom color=gray!60] (10,-3)--(9.5, -4)--(10.5, -4)--cycle; \draw (10 , -2)-- (10,-3)--(9.5, -4); \draw (10,-3)--(10.5, -4); \fill [top color=gray, bottom color=gray!60] (11,-4)--(10.5, -5)--(11.5, -5)--cycle; \draw (11 , -3)-- (11,-4)--(10.5, -5); \draw (11,-4)--(11.5, -5); \fill [top color=gray, bottom color=gray!60] (12,-5)--(11.5, -6)--(12.5, -6)--cycle; \draw (12 , -4)-- (12,-5 )--(11.5, -6); \draw (12,-5)--(12.5, -6); \fill [top color=gray, bottom color=gray!60] (13,-6)--(12.5, -7)--(13.5, -7)--cycle; \draw (13 , -5 )-- (13,-6 )--(12.5, -7); \draw (13,-6 )--(13.5, -7 ); \end{tikzpicture} \] Therefore \begin{equation}\label{eq:th:sharppointsPi03-error} \mu^{\mathrm{C}} ( E_n ) = r_n \left ( 1 - 2^{ - n - 5 } \right ) \end{equation} and \begin{equation}\label{eq:th:sharppointsPi03-N_s}
{\boldsymbol N}\!_{u_n} \cap E_n = {\boldsymbol N}\!_{ v_n } \cap E_n = \emptyset . \end{equation} We are now ready to define \( \tilde{ \mathcal{G} } \) and \( T \). Let \[ \begin{split} \Sigma & = \setofLR{ u_n }{ n \in \omega } \cup \setofLR{ v_n }{ n \in \omega \setminus \set{0} } \\
& = \setofLR{0^{( k )} , 1^{( k + 1 ) } }{ k \geq 6 } . \end{split} \] A sequence \( \sigma \in \pre{ < \omega }{ \Sigma } \) is \begin{itemize}[leftmargin=1pc] \item \markdef{ascending} if it is of the form \( \seq{ u_n , u_{n + 1} , \dots , u_{n + k } } \) with \( n , k \geq 0 \), \item \markdef{descending} if it is of the form \( \seq{ v_n , v_{n - 1} , \dots , v_{n - k } } \) with \( n > k \geq 0 \), \item \markdef{good} if either \begin{itemize} \item \( \sigma = \emptyset \), or else \item it is \markdef{positive}, that is a concatenation of an odd number of blocks of ascending and descending sequences, where the ascending and descending sequences alternate: \[ \sigma = \seq{ u_0 , \dots , u_{ n_0 } } {}^\smallfrown \seq{ v_{ n_0 + 1 } , \dots , v_{ n_1 } } {}^\smallfrown \seq{ u_{ n_1 - 1 } , \dots , u_{ n_2 } } {}^\smallfrown \dots {}^\smallfrown \seq{ u_{ n_k - 1} , \dots , u_{ n_{k + 1} } } , \] or else \item
it is \markdef{negative}, that is a concatenation of an even number of blocks of ascending and descending sequences, where the ascending and descending sequences alternate: \[ \sigma = \seq{ u_0 , \dots , u_{ n_0 } } {}^\smallfrown \seq{ v_{ n_0 + 1 } , \dots , v_{ n_1 } } {}^\smallfrown \seq{ u_{ n_1 - 1 } , \dots , u_{ n_2 } } {}^\smallfrown \dots {}^\smallfrown \seq{ v_{ n_k + 1 } , \dots , v_{ n_{k + 1} } } . \] \end{itemize} \end{itemize}
The collection \( \mathcal{G} \) of all good sequences \( \sigma \) is a tree on \( \Sigma \), and can be defined as follows (see Figure~\ref{fig:treeofgoodnodes}): \begin{itemize}[leftmargin=1pc] \item \( \seq{ u_0 } \) is the least nonempty node, \item if a node \( \sigma \) ends with \( u_k \), then its immediate successors are \( \sigma {}^\smallfrown \seq{ u_{k + 1} } \) and \( \sigma {}^\smallfrown \seq{ v_{ k + 1 } } \), \item if the node \( \sigma \) ends with \( v_k \) then: \begin{itemize} \item if \( k > 1 \) there are two immediate successors \( \sigma {}^\smallfrown \seq{ u_{ k - 1} } \) and \( \sigma {}^\smallfrown \seq{v_{k - 1 }} \), \item if \( k = 1 \) then there is a unique immediate successor \( \sigma {}^\smallfrown \seq{ u_0 } \). \end{itemize} \end{itemize} \begin{figure}
\caption{The first few nodes of the tree \( \mathcal{G} \)}
\label{fig:treeofgoodnodes}
\end{figure} Given \( \sigma \in \mathcal{G} \) let \( \tilde{ \sigma } \in \pre{ < \omega }{2} \) be the sequence obtained by concatenating the sequences in \( \sigma \). In other words, if \( \sigma \) is positive as above then \[ \tilde{ \sigma } = \underbracket[0.5pt]{u_0 {}^\smallfrown \dots {}^\smallfrown u_{ n_0 }} {}^\smallfrown \underbracket[0.5pt]{v_{ n_0 + 1 } {}^\smallfrown \dots {}^\smallfrown v_{ n_1 }} {}^\smallfrown \underbracket[0.5pt]{ u_{ n_1 - 1 } {}^\smallfrown \dots {}^\smallfrown u_{ n_2 } } {}^\smallfrown \dots \dots{}^\smallfrown \underbracket[0.5pt]{ u_{ n_k - 1} {}^\smallfrown \dots {}^\smallfrown u_{ n_{k + 1}}} , \] and similarly for negative \( \sigma \). Let \[
\tilde{ \mathcal{G} } = \setofLR{ \tilde{ \sigma } }{ \sigma \in \mathcal{G} } \subseteq \pre{ < \omega}{2} . \] Note that any \( s \in \tilde{ \mathcal{G} } \) determines a unique \( \sigma \in \mathcal{G} \) such that \( s = \tilde{ \sigma } \). Using the same notation as before, let \( \boldsymbol{n} ( s ) \) for \( s \in \tilde{ \mathcal{G} } \) be defined by \[ \boldsymbol{n} ( s ) = \begin{cases} n_{ k + 1 } + 1 & \text{if \( s \) is positive,} \\ n_{ k + 1 } - 1 & \text{if \( s \) is negative,} \\ 0 & \text{if } s = \emptyset . \end{cases} \] A branch of \( \mathcal{G} \) is a sequence \( \seqofLR{ w_n }{ n \in \omega } \) of elements of \( \Sigma \) such that each \( \sigma _n \equalsdef \seq{ w_0 , \dots , w_n } \in \mathcal{G} \), so any branch of \( \mathcal{G} \) yields a branch of \( T \) by letting \begin{equation}\label{eq:branchfrombranch} x = w_0 {}^\smallfrown w_1 {}^\smallfrown \dots = \bigcup_{ n \in \omega } \tilde{ \sigma }_n . \end{equation} Conversely, any \( x \in \body{T} \) yields a branch of \( \mathcal{G} \). An \( x \in \body{T} \) is oscillating if \( \setof{n \in \omega }{ \sigma _n \text{ is positive}} \) and \( \setof{n \in \omega }{ \sigma _n \text{ is negative}} \) are both infinite; otherwise \( \sigma _n \) is positive for all sufficiently large \( n \) (every descending block is finite, so positive \( \sigma _n \)'s occur cofinally along any branch), and \( x \) is said to be positive. Let \[ U_s = E_{ \boldsymbol{n} ( s ) } \] so that the definition of \( K \) as in~\eqref{eq:th:sharppointsPi03defK} is complete.
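To illustrate the definitions just given, take \( \sigma = \seq{ u_0 , u_1 } {}^\smallfrown \seq{ v_2 } {}^\smallfrown \seq{ u_1 } \): it is the concatenation of three alternating blocks, hence positive, with \( n_0 = 1 \), \( n_1 = 2 \), and \( n_2 = 1 \) in the notation above. Then \[ \tilde{ \sigma } = 0^{ ( 6 ) } {}^\smallfrown 0^{ ( 7 ) } {}^\smallfrown 1^{ ( 8 ) } {}^\smallfrown 0^{ ( 7 ) } , \qquad \boldsymbol{n} ( \tilde{ \sigma } ) = n_2 + 1 = 2 , \] so the clopen set attached to \( \tilde{ \sigma } \) in~\eqref{eq:th:sharppointsPi03defK} is \( U_{ \tilde{ \sigma } } = E_2 \), and by Claim~\ref{claim:sharppointsPi03} below \( \rho ( \tilde{ \sigma } ) = 2 \).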
\paragraph{\bfseries Checking that the construction works.} First of all we check that the function \( \rho \) of~\eqref{eq:th:sharppointsPi03rho} is defined on \( \tilde{\mathcal{G}} \).
\begin{claim}\label{claim:sharppointsPi03}
\( \FORALL{ s \in \tilde{ \mathcal{G} } } \left ( \rho ( s ) = \boldsymbol{n} ( s ) \right ) \). \end{claim}
\begin{proof} Fix \( s \in \tilde{ \mathcal{G} } \) and let \( n = \boldsymbol{n} ( s ) \). Equation~\eqref{eq:th:sharppointsPi03-error} yields that \[ \card{ \mu^{\mathrm{C}} ( \LOC{K}{s} ) - r_n } \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{s} ) - \mu^{\mathrm{C}} ( E_n ) } + \card{ \mu^{\mathrm{C}} ( E_n ) - r_n } \leq 2^{ - n - 5 } + r_n 2^{ - n - 5 } \leq 2^{ - n - 4 } . \] The triangle inequality and~\eqref{eq:th:sharppointsPi03r_n} imply that \begin{multline*}
2^{ - n - 2 } \leq \card{ r_n - r } - \card{ \mu^{\mathrm{C}} ( \LOC{K}{s} ) - r_n } \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{s} ) - r } \\ {} \leq \card{ \mu^{\mathrm{C}} ( \LOC{K}{s} ) - r_n } + \card{ r_n - r } < 2^{ - n - 1 } , \end{multline*} which is what we had to prove. \end{proof}
Note that taking \( s = \emptyset \) we obtain that \( 1 / 4 \leq \card{ \mu^{\mathrm{C}} ( K) - r } < 1 /2 \) hence~\eqref{eq:th:sharppointsPi03measureK} holds. Next we check that \( \rho \) is defined on all of \( T \).
Fix \( s \in \tilde{\mathcal{G}} \) and let \( n = \boldsymbol{n} ( s ) \). For \( 0 < k \leq n + 5 \) and \( i \in \set{0 , 1 } \) we have that \[ \LOC{K}{ s {}^\smallfrown i^{( k )}} = i^{( n + 6 - k )} {}^\smallfrown \LOC{K}{ s {}^\smallfrown i^{( n + 6 )}} \cup \bigcup_{0 \leq j \leq n + 5 - k} i^{( j )} {}^\smallfrown ( 1 - i ) {}^\smallfrown D_n \] hence \begin{equation}\label{eq:painful} \mu^{\mathrm{C}} \bigl ( \LOC{K}{ s {}^\smallfrown i^{( k )}} \bigr ) = 2^{- n - 6 + k} \mu^{\mathrm{C}} \bigl ( \LOC{K}{ s {}^\smallfrown i^{( n + 6 )}} \bigr ) + r_n \left ( 1 - 2^{- n - 6 + k} \right ) . \end{equation} Since \( \card{\mu^{\mathrm{C}} \bigl ( \LOC{K}{ s {}^\smallfrown i^{( n + 6 )}} \bigr ) - r } < 1 / 2 \) and \( \card{r_n - r } < 1 / 2 \) by~\eqref{eq:th:sharppointsPi03r_n}, it follows that \( \card{\mu^{\mathrm{C}} \bigl ( \LOC{K}{ s {}^\smallfrown i^{( k )}}\bigr ) -r } < 1 / 2 \). Therefore \( \rho \colon T \to \omega + 1 \) is well-defined.
In order to verify~\eqref{eq:goingup} and~\eqref{eq:goingdown}, it is enough to prove them when \( m = \rho ( t ) + 1 \) and \( m = \rho ( t ) - 1 \), if \( \rho ( t ) \neq 0 \). So fix \( t \in \tilde{\mathcal{G}} \) and let \( n = \boldsymbol{n} ( t ) = \rho ( t ) \). If \( n = 0 \), then either \( t = \emptyset \) or else it ends with \( v_1 \), and therefore it has exactly one immediate successor \( s^+ \) in \( \tilde{\mathcal{G}} \), and \( \rho ( s^+ ) = 1 \). If \( n > 0 \) then it has two immediate successors \( s^+ \) and \( s^- \) in \( \tilde{\mathcal{G}} \), that is \( s^+ = t {}^\smallfrown 0^{ ( n + 6 ) } \) and \( s^- = t {}^\smallfrown 1^{ ( n + 6 ) } \), and \( \rho ( s^+ ) = n + 1 \) and \( \rho ( s^- ) = n - 1 \). We must check that if \( t \subset u \subset s^+ \) then \( \rho ( u ) \geq n \), and that if \( t \subset u \subset s^- \) then \( \rho ( u ) \geq n - 1 \). If \( u = t {}^\smallfrown 0^{( k )} \) then \begin{align*} \card{ \mu^{\mathrm{C}} ( \LOC{K}{ t {}^\smallfrown 0^{( k )} } ) - r } & = \cardLR{ \frac{ \mu^{\mathrm{C}} ( \LOC{K}{s^+} ) }{2^{ n + 6 - k } } + r_n \left ( 1 - \frac{1}{2^{ n + 6 - k } }\right ) - r} && \text{by~\eqref{eq:painful}} \\
& \leq \frac{1}{2^{ n + 6 - k } } \card{\mu^{\mathrm{C}} ( \LOC{K}{s^+} ) - r } + \left ( 1 - \frac{1}{2^{ n + 6 - k } } \right ) \card{ r_n - r }
\\
& < \frac{1}{2^{ n + 6 - k } } 2^{-n - 2} + \left ( 1 - \frac{1}{2^{ n + 6 - k } } \right ) 2^{- n - 1} &&\text{by~\eqref{eq:th:sharppointsPi03r_n}}
\\
& < 2^{-n - 1} , \end{align*} and if \( u = t {}^\smallfrown 1^{( k )} \) with similar computations we obtain \[ \card{ \mu^{\mathrm{C}} ( \LOC{K}{ t {}^\smallfrown 1^{( k )} } ) - r } \leq \frac{1}{2^{ n + 6 - k } } \card{\mu^{\mathrm{C}} ( \LOC{K}{s^-} ) - r } + \left ( 1 - \frac{1}{2^{ n + 6 - k } } \right ) \card{ r_n - r } < 2^{-n} . \] Therefore~\eqref{eq:goingup} and~\eqref{eq:goingdown} hold.
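Note that the reduction to the cases \( m = \rho ( t ) \pm 1 \) is legitimate: for instance, given \( m > \rho ( t ) \), iterating the case just verified produces good nodes \( t = t_0 \subset t_1 \subset \dots \subset t_{ m - \rho ( t ) } = s \) with \( \rho ( t_{ i + 1 } ) = \rho ( t_i ) + 1 \), and then every \( u \) with \( t \subseteq u \subset s \) satisfies \( \rho ( u ) \geq \rho ( t ) \), as required by~\eqref{eq:goingup}; the argument for~\eqref{eq:goingdown} is analogous.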
Let us check that~\eqref{eq:th:sharppointsPi03-a}--\eqref{eq:th:sharppointsPi03-c} hold. Equation~\eqref{eq:th:sharppointsPi03-a} follows from the fact that \( \lh ( u_n ) , \lh ( v_n ) \geq 6 \) for all \( n \), equation~\eqref{eq:th:sharppointsPi03-b} follows from~\eqref{eq:th:sharppointsPi03-N_s}, equation~\eqref{eq:th:sharppointsPi03-c} follows by definition of \( \varphi \). \end{proof}
\begin{remark} Corollary~\ref{cor:blurrycomeager} shows that \( \Blur ( A ) \) is \( \boldsymbol{\Sigma}^{0}_{3} \)-complete for \emph{most} \( \eq{A} \) in the measure algebra, while Theorem~\ref{thm:sharppointsPi03} constructs \emph{some specific} compact \( K \) such that \( \Sharp ( K ) \) is \( \boldsymbol{\Pi}^{0}_{3} \)-complete. This asymmetry is to be expected, as the proof (and the statement) of Theorem~\ref{thm:sharppointsPi03} hinges on the choice of the value \( r \). \end{remark}
\section{Spongy and solid sets in \( \mathbb{R}^n \)}\label{sec:solid&spongy} In this section we shall construct a spongy subset of \( \mathbb{R} \) (Theorem~\ref{thm:spongy}) and we shall show that a solid subset of \( \mathbb{R}^n \) always has points of density \( 1 / 2 \) (Corollary~\ref{cor:nodualisticsetsinRn}).
\subsection{Spongy sets}\label{subsec:spongy} The goal of this section is to prove the following
\begin{theorem}\label{thm:spongy^n} For each \( n \geq 1 \), there is a bounded spongy set \( S \subseteq \mathbb{R}^n \). Furthermore \( S \) can be taken to be either open or closed. \end{theorem}
The crux of the matter is establishing the result for \( \mathbb{R} \) (Theorem~\ref{thm:spongy}), and this is achieved by a triadic Cantor-construction of non-shrinking diameter (Section~\ref{subsec:Cantorschemes}).
\subsubsection{Some notation} Before we jump into the technical details, let us introduce some notation that will be useful in this section.
For \( a \leq b \), \( [ a ; b ] \) denotes either the \emph{closed interval with endpoints \( a , b \)}, when \( a < b \), or else the \emph{singleton} \( \setLR{a} \), when \( a = b \).
Given an interval \( [ a ; b ] \) of length \( \leq 1 \) let \[ \varepsilon < \frac{b - a}{ 3 + 2 M } \leq \frac{1}{ 3 + 2 M } , \] where \( M \) is some number greater than \( 1 \), and let \( \Psi_{ \varepsilon } ( [ a ; b ] ) \) be the set obtained by removing from \( [ a ; b ] \) two open intervals \( ( a + \varepsilon ; a + ( 1 + M ) \varepsilon) \) and \( ( b - ( 1 + M ) \varepsilon ; b - \varepsilon ) \), each of length \( M \varepsilon \), that is \[ \Psi_{ \varepsilon } ( [ a ; b ] ) = [ a ; a + \varepsilon ] \cup [ a + ( 1 + M ) \varepsilon ; b - ( 1 + M ) \varepsilon ] \cup [ b - \varepsilon ; b ] . \] The set \( \Psi_{ \varepsilon } ( [ a ; b ] ) \) has three connected components: two side intervals of length \( \varepsilon \), and a middle interval of length \( b - a - 2 ( 1 + M ) \varepsilon \). By choice of \( \varepsilon \), the middle interval is of length \( > \varepsilon \). Since \( \varepsilon ^2 < \varepsilon / ( 3 + 2 M ) \) and since each of the three intervals has length \( \geq \varepsilon \), we can apply the operation \( \Psi_{ \varepsilon ^2} \) to each of the three intervals obtained so far, obtaining nine closed intervals. This procedure can be iterated: at stage \( n \) we have \( 3^{ n } \) closed intervals, and we apply the operation \( \Psi_{ \varepsilon ^{n + 1}} \) to them. Let \[
H_n ( a , b ) = \Bigl [ a + ( 1 + M ) \sum_{k = 1}^n \varepsilon ^k ; b - ( 1 + M ) \sum_{k = 1}^n \varepsilon ^k \Bigr ] \] be the center-most interval constructed at stage \( n \), i.e. the one containing the point \( ( a + b ) / 2 \). As \( ( 1 + M ) \sum_{k = 1}^\infty \varepsilon ^k = \frac{ ( 1 + M ) \varepsilon }{ 1 - \varepsilon } < \frac{b - a}{2} \), it follows that \begin{equation}\label{eq:connectedcomponent1} \bigcap_{n} H_n ( a , b ) = \Bigl [ a + ( 1 + M ) \sum_{k = 1}^\infty \varepsilon ^k ; b - ( 1 + M ) \sum_{k = 1}^\infty \varepsilon ^k \Bigr ] \end{equation} is a closed interval.
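Let us verify the inequality just used: \( \frac{ ( 1 + M ) \varepsilon }{ 1 - \varepsilon } < \frac{ b - a }{2} \) is equivalent to \( 2 ( 1 + M ) \varepsilon + ( b - a ) \varepsilon < b - a \), and since \( b - a \leq 1 \) and \( ( 3 + 2 M ) \varepsilon < b - a \), \[ 2 ( 1 + M ) \varepsilon + ( b - a ) \varepsilon \leq ( 2 + 2 M ) \varepsilon + \varepsilon = ( 3 + 2 M ) \varepsilon < b - a . \] In particular the interval in~\eqref{eq:connectedcomponent1} is nondegenerate.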
\subsubsection{The construction} Fix \( M > 1 \) and let \( 0 < \varepsilon < \frac{1}{ 3 + 2 M } \). Consider the triadic Cantor-construction obtained by applying the \( \Psi_{ \varepsilon ^{ n + 1 } } \) operations, that is let \[ \seqof{ K_s , I_s^- , I_s^+ }{ s \in \pre{ < \omega}{ \set{ -1 , 0 , 1 } } } \] be a sequence of intervals such that \begin{itemize} \item \( K_s = [ a_s ; b_s ] \) and \( K_\emptyset = [ 0 ; 1 ] \), that is \( a_\emptyset = 0 \) and \( b_\emptyset = 1 \), \item \( I_s^- = ( a_s + \varepsilon^{ \lh ( s ) + 1} ; a_s + ( 1 + M ) \varepsilon^{ \lh ( s ) + 1} ) \) and \( I_s^+ = ( b_s - ( 1 + M ) \varepsilon^{ \lh ( s ) + 1} ; b_s - \varepsilon^{ \lh ( s ) + 1} ) \). \end{itemize} Figure~\ref{fig:spongy} may help to visualize the construction. \begin{figure}\label{fig:spongy}
\end{figure} Following the notation in Section~\ref{subsec:Cantorschemes}, let \begin{align*}
K^{( n )} & = \bigcup_{s \in \pre{ n }{ \set{ -1 , 0 , 1 } } } K_s \\
K & = \bigcap_{n \in \omega } K^{( n )} = \bigcup_{z \in \pre{ \omega }{\setLR{ - 1 , 0 , 1 }}} \bigcap_{n \in \omega } K_{ z \mathpunct{\upharpoonright} n } . \end{align*} By induction on \( \lh s \), one checks that \( \card{ K_s } \geq \varepsilon ^{\lh s} \) and \( \varepsilon ^{ \lh ( s ) + 1 } < \card{ K_s } / ( 3 + 2 M ) \), and if \( \lh s > 0 \) then
\begin{equation}\label{eq:connectedcomponent2} s ( \lh ( s ) - 1 ) \in \setLR{ -1 , 1 } \mathbin{\, \Leftrightarrow \,} \card{ K_s } = \varepsilon ^{\lh s} . \end{equation} Recall that the connected components of \( K \) are the sets \[ \bigcap_{n \in \omega } K_{ z \mathpunct{\upharpoonright} n} = [ a_z ; b_z ] \] where \( a_z = \sup_{n \in \omega} a _{z \mathpunct{\upharpoonright} n} \) and \( b_z = \inf_{n \in \omega} b _{z \mathpunct{\upharpoonright} n} \). By~\eqref{eq:connectedcomponent1} and~\eqref{eq:connectedcomponent2} \( a_z < b_z \mathbin{\, \Leftrightarrow \,} z \in F \), where \[ F = \setofLR{ z \in \pre{ \omega }{\setLR{ - 1 , 0 , 1 }} }{ \EXISTS{n} \FORALL{m \geq n } ( z ( m ) = 0 ) } . \] Therefore \( \Int ( K ) = \bigcup \setofLR{ ( a_z ; b_z )}{ z \in F } \) and \( \lambda ( K ) > 0 \).
Let \( s \in \pre{ < \omega }{ \setLR{ - 1 , 0 , 1 }} \). By induction on \( \lh s \), it can be checked that \begin{equation}\label{eq:spongydisjoint} \left ( ( a_s - M \varepsilon ^{ \lh s } ; a_s ) \cup ( b_s ; b_s + M \varepsilon ^{ \lh s } ) \right ) \cap K^{ ( \lh s ) } = \emptyset , \end{equation} hence \( ( a_s - M \varepsilon ^{ \lh s } ; a_s ) \cup ( b_s ; b_s + M \varepsilon ^{ \lh s } ) \) is disjoint from \( K \), and that \begin{equation}\label{eq:measurespongy} \begin{split} \lambda ( K_s \cap K ) & = \card{ K_s } - 2 M \sum_{ i = 0 }^{ \infty } 3^i \varepsilon^{ \lh ( s ) + i + 1 } \\
& = \card{ K_s } - \frac{ 2 M \varepsilon^{ \lh ( s ) + 1 }}{ 1 - 3 \varepsilon } . \end{split} \end{equation} Clearly \( K = K ( M , \varepsilon ) \subseteq [ 0 ; 1 ] \) is compact, and depends on \( M \) and \( \varepsilon \). Note that the construction above requires that \( \varepsilon < \frac{1}{ 3 + 2 M } \). If this requirement is strengthened by imposing that \[
0 < \varepsilon < \varepsilon _0 \equalsdef \frac{ M - 1 }{ M ( 3 + 2 M ) - 3 } , \] a spongy set is obtained.
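Note that \( \varepsilon _0 < \frac{1}{ 3 + 2 M } \), so this is indeed a strengthening of the previous requirement: the inequality \( ( M - 1 ) ( 3 + 2 M ) < M ( 3 + 2 M ) - 3 \) simplifies to \( 3 + 2 M > 3 \), which trivially holds.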
\begin{theorem}\label{thm:spongy} \( \FORALL{ M > 1} \FORALL{ \varepsilon \in ( 0 ; \varepsilon _0 ) } \) the sets \( K ( M , \varepsilon ) \) and \( \Int \left ( K ( M , \varepsilon ) \right ) \) are spongy. \end{theorem}
\begin{proof} We are going to show that for \( M >1 \) and \( \varepsilon < \varepsilon _0 \) \begin{equation*} \FORALL{z \in \pre{ \omega }{\setLR{ - 1 , 0 , 1 }} } \left ( \mathscr{O}_K ( a_z ) , \mathscr{O}_K ( b_z ) > 0 \right ) . \end{equation*} Therefore \( \mathscr{O}_K ( x ) > 0 \) for all \( x \in K \setminus \Int ( K ) = \setofLR{a_z , b_z}{ z \in \pre{ \omega }{\setLR{ - 1 , 0 , 1 }} } \), thus \( K \) is spongy and closed. Since \( \Fr ( K ) = \Exc ( K ) \), by the Lebesgue density theorem \( K =_\mu \Int ( K ) \), so \( \Int ( K ) \) is spongy and open.
The idea behind the proof is an elaboration of the argument used in Examples~\ref{xmp:densitybutnoleftorrightdensities} and~\ref{xmp:oscillatingdensity}.
Let \( x \in K_{ s {}^\smallfrown \seq{ -1 } } \). By~\eqref{eq:spongydisjoint} we have (see Figure~\ref{fig:spongy}): \begin{equation}\label{eq:spongycontained} ( x - \varepsilon ^{ \lh ( s ) + 1} ; x + \varepsilon ^{ \lh ( s ) + 1} ) \cap K \subseteq ( x - M \varepsilon ^{ \lh ( s ) + 1} ; x + M \varepsilon ^{ \lh ( s ) + 1 } ) \cap K \subseteq K_{ s {}^\smallfrown \seq{ -1 } } \end{equation} hence \begin{equation}\label{eq:th:spongy-a} \begin{aligned} \frac{\lambda ( ( x - M \varepsilon ^{ \lh ( s ) + 1} ; x + M \varepsilon ^{ \lh ( s ) + 1 } ) \cap K )}{ 2 M \varepsilon ^{ \lh ( s ) + 1}} & < \frac{ \card{ K_{ s {}^\smallfrown \seq{ -1 } } } }{ 2 M \varepsilon ^{ \lh ( s ) + 1}} \\
& = \frac{1}{ 2 M } & \text{by~\eqref{eq:connectedcomponent2}} \end{aligned} \end{equation} and by~\eqref{eq:measurespongy} with \( s {}^\smallfrown \seq{ -1 } \) in place of \( s \), \begin{equation}\label{eq:th:spongy-b} \begin{split} \frac{ \lambda ( ( x - \varepsilon ^{ \lh ( s ) + 1} ; x + \varepsilon ^{ \lh ( s ) + 1} ) \cap K ) }{ 2 \varepsilon ^{ \lh ( s ) + 1}} & = \frac{1}{ 2 \varepsilon ^{ \lh ( s ) + 1}} \Bigl [ \varepsilon ^{ \lh ( s ) + 1} - \frac{ 2 M \varepsilon ^{ \lh ( s ) + 2} }{1 - 3 \varepsilon } \Bigr ]
\\ & = \frac{1 - ( 3 + 2 M ) \varepsilon }{2 - 6 \varepsilon } \\ & \equalsdef f ( M , \varepsilon ). \end{split} \end{equation} Note that for fixed \( M \) we have that \( \lim_{ \varepsilon {\downarrow} 0} f ( M , \varepsilon ) = \frac{1}{2}\), and since \( M > 1 \) and \( \varepsilon < \varepsilon_0 \), then \begin{equation*} f ( M , \varepsilon ) > \frac{1 }{ 2 M } . \end{equation*} Therefore if \( z \in \pre{ \omega }{ \setLR{ - 1 , 0 , 1 }} \) has infinitely many \( -1 \), then letting \( s = z \mathpunct{\upharpoonright} n \) with \( z ( n ) = - 1 \), it follows that \( a_z = b_z \in K_{ s {}^\smallfrown \seq{ -1 } } \), so~\eqref{eq:th:spongy-a} implies that \begin{equation}\label{eq:D-(az)} \mathscr{D}^-_K ( a_z ) < \frac{1}{2M} \end{equation} and since \( \varepsilon < \varepsilon_0 \), then~\eqref{eq:th:spongy-b} implies that \begin{equation}\label{eq:D+(az)} \mathscr{D}^+_K ( a_z ) \geq f ( M , \varepsilon ) . \end{equation} Thus \( \mathscr{O}_K ( a_z ) > 0 \). A similar argument applies to the case when \( z \) has infinitely many \( 1 \).
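Let us record why \( f ( M , \varepsilon ) > \frac{1}{ 2 M } \) when \( \varepsilon < \varepsilon _0 \): the inequality \[ \frac{ 1 - ( 3 + 2 M ) \varepsilon }{ 2 - 6 \varepsilon } > \frac{1}{ 2 M } \] is equivalent to \( M \bigl ( 1 - ( 3 + 2 M ) \varepsilon \bigr ) > 1 - 3 \varepsilon \), that is to \( M - 1 > \varepsilon \bigl ( M ( 3 + 2 M ) - 3 \bigr ) \), which is precisely \( \varepsilon < \varepsilon _0 \). In particular \( \mathscr{O}_K ( a_z ) > f ( M , \varepsilon ) - \frac{1}{ 2 M } > 0 \) in the argument above.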
Suppose now \( z \in F \), and let \( s \) be any large enough initial segment of \( z \) so that \( z = s {}^\smallfrown 0^{ ( \omega )} \). Then \( a_z \) and \( b_z \) are the endpoints of the closed interval \( \bigcap_{n} [ a_{ s {}^\smallfrown 0^{( n ) }} ; b_{ s {}^\smallfrown 0^{( n ) }} ] \). We only show that \( \mathscr{O}_K ( b_z ) > 0 \), the argument for \( \mathscr{O}_K ( a_z ) > 0 \) being similar. Since \( ( b_z - r ; b_z ) \subseteq K \) for sufficiently small \( r \), it is enough to prove that \( \mathscr{D}_K^+ ( b_z^+ ) > \mathscr{D}_K^- ( b_z^+ ) \). For ease of notation, set \[ g ( x ) = \frac{ \lambda ( K \cap ( b_z ; x ) ) }{ \card{ b_z - x } }, \quad \text{for }x > b_z . \] We will show (see~\eqref{eq:th:spongy-f} below) that for any \( s \) as above, the numbers \( g ( a_{ s {}^\smallfrown \seq{ 1 } } ) \) and \( g ( b_s ) \) are sufficiently far apart so that \( \mathscr{D}_K^+ ( b_z^+ ) > \mathscr{D}_K^- ( b_z^+ ) \) holds.
Recall that \( b_z = \inf_n b_{ s {}^\smallfrown 0^{( n )} } = \inf_n a_{ s {}^\smallfrown 0^{( n )}{}^\smallfrown \seq{1} } \) and \[ b_{ s {}^\smallfrown 0^{( n + 1 )} } < a_{ s {}^\smallfrown 0^{( n )} {}^\smallfrown \seq{1 } } < b_{ s {}^\smallfrown 0^{( n )} {}^\smallfrown \seq{1 } } = b_{ s {}^\smallfrown 0^{( n )} } \] as summarized by Figure~\ref{fig:spongy2}. \begin{figure}\label{fig:spongy2}
\end{figure} \begin{subequations}\label{eq:subequations} \begin{gather}
a_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 }} = b_{ s {}^\smallfrown 0^{( n ) } } - \varepsilon ^{ \lh ( s ) + n + 1 } \label{eq:as1>bs}
\\
b_{ s {}^\smallfrown 0^{ ( n + 1 ) }} = b_{ s {}^\smallfrown 0^{ ( n ) }} - ( 1 + M ) \varepsilon^{\lh ( s ) + n + 1 } \label{eq:as1>bs2}
\\
b_{z} = b_{ s} - \frac{( 1 + M ) \varepsilon^{\lh ( s ) + 1 } }{ 1 - \varepsilon } . \label{eq:bz>bs} \end{gather} \end{subequations} Since \( K \cap \ocinterval{b_z}{b_s} \subseteq \bigcup_{n \in \omega } [ a_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 } } ; b_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 }} ] = \bigcup_{ n \in \omega } K_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 } } \), then \begin{align*} g ( b_s ) & = \frac{ 1 - \varepsilon }{( 1 + M ) \varepsilon^{\lh ( s ) + 1 }} \sum_{n = 0}^\infty \lambda ( K \cap K_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 } } ) &\text{by~\eqref{eq:bz>bs}} \\ & = \frac{ 1 - \varepsilon } {( 1 + M ) \varepsilon^{\lh ( s ) + 1 } }\sum_{n = 0}^\infty \bigl [ \card{ K_{ s {}^\smallfrown 0^{( n ) } {}^\smallfrown \seq{ 1 } } } - \frac{ 2 M \varepsilon^{ \lh ( s ) + n + 2 }}{ 1 - 3 \varepsilon } \bigr ] &\text{by~\eqref{eq:measurespongy}} \\ & = \frac{ 1 - \varepsilon } {( 1 + M ) \varepsilon^{\lh ( s ) + 1 } }\sum_{n = 0}^\infty \bigl [ \varepsilon^{\lh ( s ) + n + 1 } - \frac{ 2 M \varepsilon^{ \lh ( s ) + n + 2 }}{ 1 - 3 \varepsilon } \bigr ] &\text{by~\eqref{eq:connectedcomponent2}} \\ & = \frac{ 1 - \varepsilon } {( 1 + M ) }\sum_{n = 0}^\infty \bigl [ 1- \frac{ 2 M \varepsilon }{ 1 - 3 \varepsilon } \bigr ] \varepsilon^{ n } \\ & = \frac{ 1 - \varepsilon ( 3 + 2 M ) }{ ( 1 + M ) ( 1 - 3 \varepsilon ) }. \end{align*} For fixed \( M \), the map \( \varepsilon \mapsto \frac{ 1 - \varepsilon ( 3 + 2 M ) }{ ( 1 + M ) ( 1 - 3 \varepsilon ) } \) is decreasing, and since \( \varepsilon < \varepsilon_0 \), \begin{equation*} g ( b_s ) > \frac{ 1 }{ M ( 1 + M ) } . \end{equation*} By the equations~\eqref{eq:subequations}, \begin{equation}\label{eq:th:spongy-d}
\frac{ \card{ b_z - b_{s {}^\smallfrown \seq{ 0 } } } }{ \card{ b_z - a _{ s {}^\smallfrown \seq{ 1 }} } } = \frac{ ( 1 + M ) \varepsilon / ( 1 - \varepsilon ) }{ M + ( 1 + M ) \varepsilon / ( 1 - \varepsilon ) } . \end{equation}
As \( K \cap ( b_{ s {}^\smallfrown \seq{ 0 } } ; a_{ s {}^\smallfrown \seq{ 1 } } ) = \emptyset \), then
\begin{align*} g ( a_{ s {}^\smallfrown \seq{ 1 }} ) & = \frac{ \lambda ( K \cap \ocinterval{ b_z }{ b_{s {}^\smallfrown \seq{ 0 } } } ) }{ \card{ b_z - a_{ s {}^\smallfrown \seq{ 1 } } } } \\ & < \frac{ \card{ b_z - b_{s {}^\smallfrown \seq{ 0 } } } }{ ( 1 + M ) \card{ b_z - a _{ s {}^\smallfrown \seq{ 1 }} } } && \\ & = \frac{ \varepsilon / ( 1 - \varepsilon ) }{ M + ( 1 + M ) \varepsilon / ( 1 - \varepsilon ) } && \text{by~\eqref{eq:th:spongy-d}} \\ &= \frac{ \varepsilon }{ M + \varepsilon } . \end{align*}
For fixed \( M \) the map \( \varepsilon \mapsto \frac{ \varepsilon }{ M + \varepsilon } \) is increasing and takes the value \( \frac{ M - 1 }{ 2 M^3 + 3 M^2 - 2 M - 1 } \) at \( \varepsilon = \varepsilon _0 \) (as \( M + \varepsilon _0 = \frac{ 2 M^3 + 3 M^2 - 2 M - 1 }{ 2 M^2 + 3 M - 3 } \)), so since \( \varepsilon < \varepsilon_0 \), then \begin{equation}\label{eq:th:spongy-f} g ( a_{ s {}^\smallfrown \seq{ 1 }} ) < \frac{M - 1}{ 2 M^3 + 3M^2 - 2M - 1} < \frac{1}{M ( M + 1 )} < g ( b_s ) . \end{equation} Therefore \( \mathscr{D}_K^- ( b_z ^+ ) < \mathscr{D}_K^+ ( b_z ^+ ) \) as required. \end{proof}
\begin{remarks}\label{rmk:spongysubset} \begin{enumerate-(a)} \item\label{rmk:spongysubset-a} Since \( 0 = a_{ -1 ^{ ( \omega ) } } = a_\emptyset \) and \( 1 = b_{ 1^{( \omega )} } = b_\emptyset \), equations~\eqref{eq:D-(az)} and~\eqref{eq:D+(az)} imply that \( \mathscr{O}_S ( 0 ) , \mathscr{O}_S ( 1 ) > 0 \), where \( S = K \) or \( S = \Int K \). \item\label{rmk:spongysubset-b} Choosing suitable \( M \) and \( \varepsilon \), a spongy set \( S \subseteq \mathbb{R} \) is obtained so that \( S \times \mathbb{R} \) is spongy in \( \mathbb{R}^2 \). This result will appear elsewhere. \end{enumerate-(a)} \end{remarks}
\begin{corollary} For every \( m \in ( 0 ; 1 ) \) there is a spongy set \( X \subset [ 0 ; 1 ] \) such that \( \inf X = 0 \), \( \sup X = 1 \), and \( \lambda ( X ) = m \). Moreover \( X \) can be taken to be open or closed. Furthermore we can arrange the construction so that \( 0 < \mathscr{O}_X ( 0 ) , \mathscr{O}_X ( 1 ) \) or \( \mathscr{O}_X ( 0 ) = \mathscr{O}_X ( 1 ) = 0 \). \end{corollary}
\begin{proof} Let \( S \) be an open, spongy set as in Theorem~\ref{thm:spongy} and let \( 0 < M = \lambda ( S ) < 1 \). By Remark~\ref{rmk:spongysubset}\ref{rmk:spongysubset-a}, \( 0 < \mathscr{O}_S ( 0 ) , \mathscr{O}_S ( 1 ) \). We first prove the existence of an open spongy set \( X \) of measure \( m \) and such that \( \mathscr{O}_X ( i ) = \mathscr{O}_S ( i ) \) for \( i = 0 , 1 \). The affine map \( [ 0 ; 1 ] \to [ a ; b ] \), \( x \mapsto a + ( b - a ) x \), preserves densities, thus the image of \( S \) under this map, call it \( S_{a , b} \), is a spongy subset of \( [ a ; b ] \) such that \( \mathscr{O}_{ S_{a,b} } ( a ) = \mathscr{O}_S ( 0 ) \) and \( \mathscr{O}_{ S_{a,b} } ( b ) = \mathscr{O}_S ( 1 ) \), and \( \lambda ( S_{a,b} ) = ( b - a ) M \). For each \( 0 < \alpha < 1 / 2 \) the sets \( X^- ( \alpha ) = S_{ 0 , \alpha } \cup S_{ 1 - \alpha , 1 } \) and \( X^+ ( \alpha ) = X^- ( \alpha ) \cup \left ( \alpha ; 1 - \alpha \right ) \) are open, spongy, and have measure \( 2 M \alpha \) and \( 1 - 2 \alpha ( 1 - M ) \), respectively, and therefore for each \( m \in ( 0 ; 1 ) \) there is an open \( X \) as in the statement. The requirement ``\( X \) closed'' can be fulfilled by starting with a closed \( S \) and using \( [ \alpha ; 1 - \alpha ] \) in the definition of \( X^+ ( \alpha ) \).
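Explicitly: as \( \alpha \) ranges over \( ( 0 ; 1 / 2 ) \), the measures \( \lambda ( X^- ( \alpha ) ) = 2 M \alpha \) and \( \lambda ( X^+ ( \alpha ) ) = 1 - 2 \alpha ( 1 - M ) \) sweep the intervals \( ( 0 ; M ) \) and \( ( M ; 1 ) \) respectively, so given \( m \in ( 0 ; 1 ) \) one may take \[ X = \begin{cases} X^- ( m / 2 M ) & \text{if } m < M , \\ S & \text{if } m = M , \\ X^+ \bigl ( ( 1 - m ) / ( 2 - 2 M ) \bigr ) & \text{if } m > M . \end{cases} \]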
Let us now show how to modify the construction in order to attain \( \mathscr{O}_X ( 0 ) = \mathscr{O}_X ( 1 ) = 0 \). Choose \( \varepsilon _n {\downarrow} 0 \) such that \( \varepsilon _0 \leq 1/2 \) and let \( X_n^0 \subseteq ( \varepsilon _{ 2 n + 1} ; \varepsilon _{2 n} ) \) and \( X_n ^1 \subseteq ( 1 - \varepsilon _{ 2n } ; 1 - \varepsilon _{ 2n + 1 } ) \) be spongy sets such that \( \lambda ( X^i_n ) / ( \varepsilon _{ 2 n } - \varepsilon _{ 2 n + 1 } ) \leq 2^{ - n } \), for \( i = 0 , 1 \). Then \( X = \bigcup_{n \in \omega } X^0_n \cup X^1_n \) is spongy and \( \mathscr{O}_X ( 0 ) = \mathscr{O}_X ( 1 ) = 0 \). \end{proof}
\subsection{Solid sets} Balls in \( \mathbb{R}^n \) are typical examples of solid sets. A ball in \( \mathbb{R} \) of center \( x \) and radius \( r \) is just the interval \( ( x - r ; x + r ) \) and the points of its frontier \( \set{x - r , x + r } \) have density \( 1 / 2 \). The same is true for \( B_2 = \setof{ \mathbf{y} \in \mathbb{R}^{ n + 1 } }{ \absval{ \mathbf{y} - \mathbf{x} }_2 < r } \), the ball in \( \mathbb{R}^{n + 1} \) with center \( \mathbf{x} \) and radius \( r \): its frontier is the \( n \)-dimensional sphere \( S_2 = \setof{ \mathbf{y} }{ \absval{ \mathbf{y} - \mathbf{x} }_2 = r } \) which, being a differentiable manifold, can be smoothly approximated with a hyperplane at every point, and therefore \( \mathscr{D}_{B_2} ( \mathbf{y} ) = 1 / 2 \) for all \( \mathbf{y} \in S_2 \). The index \( 2 \) refers to the fact that we used the \( \ell_2 \)-norm, but a similar argument works for the \( \ell_p \)-norm, with \( 1 < p < + \infty \). When \( p \in \set{ 1 , + \infty } \) the ball \( B_p \) is still solid, but \( S_p \) is no longer smooth, and we get the weaker result that \( \mathscr{D}_{B_p} ( \mathbf{y} ) = 1 / 2 \) for comeager many \( \mathbf{y} \in S_p \) (in fact, for all \( \mathbf{y} \) outside the lower-dimensional faces of \( S_p \)).
\begin{definition}\label{def:quasiEuclidean} A Polish measure space \( ( X , d , \mu ) \) is \markdef{quasi-Euclidean} if it is locally compact and connected, and \( \mu \) is continuous, fully supported, locally finite, and satisfies the DPP. \end{definition}
Thus \( \mathbb{R}^n \) with the \( \ell_p \)-metric (\( 1 \leq p \leq \infty \)) and the \( n \)-dimensional Lebesgue measure is quasi-Euclidean. Note that all \( \ell_p \) metrics on \( \mathbb{R}^n \) are equivalent.
\begin{theorem}\label{thm:solid} Suppose \( ( X , d , \mu ) \) is quasi-Euclidean and that \( A \subseteq X \) is nontrivial and solid. Suppose \( d' \) is an equivalent metric such that every \( \Ball' ( x ; r ) = \setofLR{ z \in X}{ d' ( z , x ) < r } \) is solid and there is a \( \rho \in ( 0 ; 1 ) \) such that \[
\FORALL{ x , y \in X } \FORALL{ r > 0 } \left [ d' ( y , x ) = r \Rightarrow \mathscr{D}_{ \Ball' ( x ; r ) } ( y ) = \rho \right ] . \] Then \begin{enumerate-(a)} \item\label{thm:solid-a} \( \Fr_\mu ( A ) \) is closed and nonempty, \item\label{thm:solid-b} \( \setofLR{ x \in X }{ \mathscr{D}_A ( x ) = \rho } \) is a dense subset of \( \Fr_\mu ( A ) \), and \item\label{thm:solid-c} \( \rho = 1 / 2 \). \end{enumerate-(a)} \end{theorem}
\begin{remark} The density function \( \mathscr{D} \) in Theorem~\ref{thm:solid} refers to the metric \( d \), not to \( d' \). \end{remark}
By Proposition~\ref{prop:solidsetBaireclassDensity} \( \setofLR{ x \in X }{ \mathscr{D}_A ( x ) = \rho } \) is \( \Gdelta \) for any \( \rho \), so
\begin{corollary}\label{cor:solid} If \( X , d , d' , \mu , A \) are as in Theorem~\ref{thm:solid}, then \( \setofLR{ x \in X }{ \mathscr{D}_A ( x ) = 1 / 2 } \) is \( \Gdelta \) dense in \( \Fr_\mu ( A ) \), and \( \setofLR{ x \in X }{ \mathscr{D}_A ( x ) = \rho } \) is not dense in \( \Fr_\mu ( A ) \) for any \( \rho \in ( 0 ; 1 ) \setminus \set{ 1 / 2 } \). \end{corollary}
\begin{corollary}\label{cor:nodualisticsetsinRn} Work in \( \mathbb{R}^n \) with the \( \ell_p \) metric \( ( 1 \leq p \leq \infty ) \) and the Lebesgue measure. If \( A \subseteq \mathbb{R}^n \) is nontrivial and solid, then \( \mathscr{D}_A ( \mathbf{x} ) = \frac{1}{2} \) for comeager many \( \mathbf{x} \in \Fr_\mu ( A ) \).
In particular, there are no nontrivial dualistic sets. \end{corollary}
\begin{proof}[Proof of Theorem~\ref{thm:solid}] Part~\ref{thm:solid-a} follows from the fact that \( A \) is nontrivial and \( X \) is connected. For ease of notation, let \( F = \Fr_\mu ( A ) \).
The crux of the matter is the proof of~\ref{thm:solid-b}. Towards a contradiction, suppose that \( \mathscr{D}_A ( x ) \neq \rho \) for all \( x \in U \cap F \), where \( U \) is open in \( X \) and \( U \cap F \neq \emptyset \). Then the sets \begin{align*} F^ + & = \setofLR{ x \in F \cap U }{ \mathscr{D}_A ( x ) > \rho } \\ F^- & = \setofLR{ x \in F \cap U }{ \mathscr{D}_A ( x ) < \rho } \end{align*} partition \( F \cap U \). Since \[ F^ + = \bigcup_{m , k} F^ + _{m, k } \qquad\text{and}\qquad F^- = \bigcup_{m, k} F^-_{m, k} \] where \begin{align*} F^ + _{m, k} & = \setofLR{ x \in F \cap U }{ \FORALL{n > m} \Bigl [ \frac{ \mu ( \Ball ( x ; 1 / n ) \cap A ) }{ \mu ( \Ball ( x ; 1 / n ) )} \geq \rho + 2^{-k} \Bigr ] } \\ F^-_{m,k} & = \setofLR{ x \in F \cap U }{ \FORALL{n > m} \Bigl [ \frac{ \mu ( \Ball ( x ; 1 / n ) \cap A ) }{ \mu ( \Ball ( x ; 1 / n ) )} \leq \rho - 2^{-k} \Bigr ] } , \end{align*} by the continuity property of the measure in Definition~\ref{def:quasiEuclidean}, the sets \( F^ + _{m, k} \) and \( F^-_{m, k} \) are closed in \( F \cap U \), and hence both \( F^ + \) and \( F^- \) are \( \boldsymbol{\Sigma}^{0}_{2} \), and therefore are \( \boldsymbol{\Delta}^{0}_{2} \). If we show that both \( F^ + \) and \( F^- \) are dense in \( F \cap U \), a contradiction follows by the Baire category theorem: \( F^+ \) and \( F^- \) would be disjoint dense \( \Gdelta \) subsets of the Polish space \( F \cap U \).
Fix \( x \in F \cap U \) towards proving that \( x \in \Cl ( F^+ ) \cap \Cl ( F^- ) \). The solidity of \( A \), together with Lemma~\ref{lem:useless}, yields that \( \Int ( F ) = \emptyset \).
\begin{claim} If \( x \in \Fr ( \Int_\mu A ) \) then \( x \in \Cl ( F^+ ) \). \end{claim}
\begin{proof} Towards a contradiction, choose \( \delta \) such that \( \Ball' ( x ; \delta ) \) is compact and such that \begin{equation}\label{eq:antiSzenes-absurd2}
\Ball ' ( x ; \delta ) \cap F^ + = \emptyset . \end{equation} Pick \( y \in \Int_\mu ( A ) \cap \Ball ' ( x ; \delta / 2 ) \). By compactness there is \( w \in X \setminus \Int_\mu ( A ) \) such that \[ 0 < r = d ' ( y , X \setminus \Int_\mu ( A ) ) = d ' ( y , w ) < \delta / 2 . \] Since \( d' ( x , w ) \leq d' ( x , y ) + d ' ( y , w ) < \delta /2 + r < \delta \), then \( w \in \Ball ' ( x ; \delta ) \), so \( w \in \Int_\mu ( A^ \complement ) \cup F^- \) by~\eqref{eq:antiSzenes-absurd2}, and therefore \( \mathscr{D}_A ( w ) < \rho \). Moreover \( \Ball' ( y ; r ) \subseteq \Int_\mu ( A ) \), by the choice of \( r \). By assumption \( \mathscr{D}_{ \Ball' ( y ; r ) } ( w ) = \rho \) hence \( \mathscr{D}^-_{\Int_\mu ( A )} ( w ) \geq \rho . \) Since \( \Int_\mu ( A ) \subseteq \Phi( A ) =_\mu A \), then \( \mathscr{D}^-_{\Int_\mu ( A )} ( w ) \leq \mathscr{D}_A ( w ) \), contradicting the preceding calculations. \end{proof}
Similarly, if \( x \in \Fr ( \Int_\mu A^\complement ) \) then \( x \in \Cl ( F^- ) \). Therefore if \( x \in \Fr ( \Int_\mu A ) \cap \Fr ( \Int_\mu A^\complement ) \) then \( x \in \Cl ( F^+ ) \cap \Cl ( F^- ) \), as required.
\begin{claim} If \( x \notin \Fr \Int_\mu ( A^\complement ) \) then \( x \in \Cl ( F^- ) \). \end{claim}
\begin{proof} Fix \( \gamma \) sufficiently small such that \( \Ball ( x ; \gamma ) \cap \Int_\mu ( A^ \complement ) = \emptyset \). Since \( x \notin \Int_\mu ( A ) \), then \( \mu ( \Ball ( x ; \gamma ) \cap A ) < \mu ( \Ball ( x ; \gamma ) ) \) so by DPP there is \( y \in \Ball ( x ; \gamma ) \) such that \( \mathscr{D}_A ( y ) = 0 \), and therefore \( y \in F^- \). \end{proof}
Similarly if \( x \notin \Fr \Int_\mu ( A ) \) then \( x \in \Cl ( F^+ ) \).
Therefore we have shown that if \( x \in F \cap U \) then \( x \in \Cl ( F^+ ) \cap \Cl ( F^- ) \). This concludes the proof of part~\ref{thm:solid-b} of the theorem.
Now we argue for part~\ref{thm:solid-c}. Fix \( y \in X \) and \( r > 0 \), and let \( A = \Ball' ( y ; r ) ^ \complement \), so by part~\ref{thm:solid-b} there is \( x_0 \in X \) such that \( \mathscr{D}_A ( x_0 ) = \rho \). On the other hand \( \mathscr{D}_A ( x_0 ) = 1 - \mathscr{D}_{A^\complement} ( x_0 ) \), and \( \mathscr{D}_{A^\complement} ( x_0 ) \in \setLR{ 0 , \rho , 1} \). Thus \( \rho = 1 - \rho = 1 / 2 \). \end{proof}
\printbibliography
\end{document}
2022, Volume 42, Issue 6: 3039-3064. doi: 10.3934/dcds.2022007
A Cantor dynamical system is slow if and only if all its finite orbits are attracting
Silvère Gangloff1 and Piotr Oprocha1,2,*
AGH University of Science and Technology, Faculty of Applied Mathematics, al. Mickiewicza 30, 30-059 Kraków, Poland
Centre of Excellence IT4Innovations - Institute for Research and Applications of Fuzzy Modeling, University of Ostrava, 30. dubna 22, 701 03 Ostrava 1, Czech Republic
*Corresponding author: Piotr Oprocha
Received: July 31, 2021
Revised: October 31, 2021
This work was supported by National Science Centre, Poland (NCN), grant no. 2019/35/B/ST1/02239
In this paper we completely solve the problem of when a Cantor dynamical system $ (X, f) $ can be embedded in $ \mathbb{R} $ with vanishing derivative. For this purpose we construct a refining sequence of marked clopen partitions of $ X $ which is adapted to a dynamical system of this kind. It turns out that there is a huge class of such systems.
Keywords: Graph cover, aperiodic system, vanishing derivative, interval map.
Mathematics Subject Classification: Primary: 37B10, 37E05; Secondary: 05C38.
Figure 1. In this representation, the two periodic orbits of the systems are not distinguished in the first graph
Figure 2. Illustration of the graph $ G(\mathcal{U}) $ for a supercyclical partition $ \mathcal{U} $ of $ (X, f) $. The dashed regions correspond to the attracted part of the partition, the remainder corresponds to the supercyclical part
Figure 3. Removing the divergent vertices in the attracted part
Figure 4. Two finite directed graphs $ G = (V, E) $ (left) and $ G' = (V', E') $ (right), and a morphism $ \pi: V\rightarrow V' $ sending each vertex of $ G $ to the vertex of the same color in $ G' $. The graph $ G' $ can correspond to a well-marked partition $ (\mathcal{V}, \tau, \chi) $. However, it is not possible to find a partition whose graph is $ G $ and which would be well-marked relative to $ (\mathcal{V}, \tau, \chi) $: whatever way we mark the red vertices, there will be at least one circuit left with no marker or no potential
Figure 5. Illustration on an example of the definition of the graphs $ \mathcal{I}(G, \chi) $ and $ \mathcal{A}(G, \chi) $ for the graph $ G = G(\mathcal{U}) $ and $ \chi: \mathcal{U} \rightarrow \{0, \downarrow, \uparrow, *\} $ where $ (\mathcal{U}, \chi) $ is a well-marked partition; the function $ \chi $ is partially represented (for simplicity): only markers and vertices with $ \chi(u) = 0 $ (the ones in dashed regions) are represented
Figure 6. Illustration of the definition of $ \iota_n $ on preimages of some vertex in $ G_{n-1} $ when this vertex is not in a circuit corresponding to a finite orbit
Figure 7. Illustration of the definition of $ \iota_n(v) $ for $ v $ a preimage by $ \pi_{n-1} $ of $ w $ which is in a circuit of the attracted part of $ \mathcal{U}_{n-1} $
Rabi splitting obtained in a monolayer BP-plasmonic heterostructure at room temperature
Yan Huang,1 Yan Liu,1,* Yao Shao,2 Genquan Han,1 Jincheng Zhang,1 and Yue Hao1
1Wide Bandgap Semiconductor Technology Disciplines State Key Laboratory, School of Microelectronics, Xidian University, Xi'an 710071, China
2Shanghai Energy Internet Research Institute of State Grid, 251 Libing Road, Pudong New Area, Shanghai 201210, China
*Corresponding author: [email protected]
https://doi.org/10.1364/OME.402194
Yan Huang, Yan Liu, Yao Shao, Genquan Han, Jincheng Zhang, and Yue Hao, "Rabi splitting obtained in a monolayer BP-plasmonic heterostructure at room temperature," Opt. Mater. Express 10, 2159-2170 (2020)
Original Manuscript: July 7, 2020
Revised Manuscript: August 5, 2020
Manuscript Accepted: August 6, 2020
Hybrid exciton states form under strong coupling between excitons and the plasmons excited by metal nanostructures. Because of its large exciton binding energy, black phosphorus (BP) is an ideal platform for investigating such strong coupling. In this paper, we demonstrate for the first time the strong coupling between the localized surface plasmon modes of two different metal nanostructures and the excitons in monolayer BP at room temperature, achieved by adjusting the nanostructure dimensions and the polarization angle. The exciton dispersion curves obtained from a coupled oscillator model show the characteristic anti-crossing behavior at the exciton energy, and the Rabi splitting energies of the two BP-metal heterostructures are 250 meV and 202 meV, respectively, paving the way toward the development of BP photodetectors, sensors, and emitters.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Controlling light-matter interactions at the nanoscale has long been a central goal of nanophotonics, and the study of strong light-matter interactions has greatly advanced the field [1–4]. Rabi splitting arises when the rate of coherent energy exchange between light and matter greatly exceeds the energy dissipation rate; the system is then in the strong-coupling regime and is no longer described by its separate constituents but by a hybrid plasmon-exciton state [5]. Strong coupling is of great significance both for fundamental research and for many practical applications, such as photon anti-bunching [6], single emitters [7], superconducting resonators [8], and optical detection [9]. Rabi splitting was initially realized in conventional inorganic semiconductor systems. However, conventional inorganic semiconductors such as GaAs suffer from a drawback that cannot be ignored: their small exciton binding energies mean that Rabi splitting can only be observed at low temperature, which severely restricts practical applications [10]. One solution is to use wide-bandgap semiconductors, such as ZnO and GaN, or organic materials [11,12], whose large exciton energies allow Rabi splitting to be observed at room temperature. However, wide-bandgap semiconductors are limited to short-wavelength applications and require complex fabrication processes, while organic systems are limited by strong localization effects arising from their disordered potential landscape. These drawbacks make it difficult for systems based on either class of materials to achieve large Rabi splitting at room temperature.
Two-dimensional (2D) materials, such as transition-metal dichalcogenides (TMDCs), graphene, and black phosphorus (BP), have attracted widespread attention in recent years owing to their atomic thickness and novel physical and chemical properties [13–16]. Because of their high exciton binding energies, 2D materials provide an ideal platform for large Rabi splitting at room temperature [17]. Graphene, a prototypical 2D material, has been used in Rabi-splitting studies: for instance, Rabi splitting of about 10 to 12 meV was demonstrated in a coupled semiconductor microcavity composed of distributed Bragg reflectors and graphene nanoribbons, indicating the feasibility of Rabi splitting in graphene [18]. However, graphene has a zero bandgap, so the turn-off of graphene-based field-effect transistors cannot be controlled effectively [19]. TMDCs have therefore attracted widespread attention: with a monolayer bandgap of about 1.8 eV, they can be used to study large Rabi splitting at room temperature. At room temperature, a hybrid structure composed of a silver nanodisk array, WS2, and an optical cavity can achieve large Rabi splitting of almost 300 meV [20]. BP, a widely studied 2D material, has been used in many applications, such as plasmonics, modulators, heterojunctions, and transistors, because of its superior physical properties [21–25]. However, BP has rarely been explored for Rabi splitting, so it is worth establishing whether BP is a suitable material for achieving large Rabi splitting. Compared with TMDCs, BP has stronger interlayer interactions and a bandgap that is tunable over all layer numbers [26–29]. In particular, the exciton binding energy (bandgap) of monolayer BP can approach 2 eV, which makes BP a promising alternative material for room-temperature Rabi splitting.
It is well known that the light-matter interaction can be modified by engineering the optical environment, and two kinds of structures are generally used to reach the strong-coupling regime. The first embeds molecular excitons in an optical microcavity. For example, hybrid polariton states have been confirmed in a hybrid structure composed of an optical microcavity and MoS2 [17]; although Rabi splitting was observed at room temperature in this structure, the splitting energy was only 46 meV. Moreover, such optical cavities are complicated to fabricate, and their many layers make them bulky: a coupled semiconductor microcavity composed of 32-layer distributed Bragg reflectors and graphene nanoribbons has a total thickness reaching several microns [18]. The second approach combines molecular excitons with metallic plasmonic structures that can excite localized surface plasmon polariton (LSPP) modes. In light-matter interactions, the coupling strength of the system scales inversely with its mode volume [30]. Metal nanostructures supporting LSPP modes can confine the incident light to the nanoscale, making it possible to achieve a strong local field in an ultra-small mode volume [31]. In recent years, several studies have reported hybrid systems based on the interaction between metal nanostructures and molecular materials [32–41]; for example, the interaction between the local plasmons of gold nanorods and the excitons of J-aggregates was studied, demonstrating strong coupling with a large Rabi splitting energy of up to 200 meV [42].
In this paper, we report the observation of room-temperature Rabi splitting in heterostructures combining a BP film with two different metallic plasmonic structures. By tuning the dimensions of a plasmonic grating and of a nanodisk array, we create spectral overlap between the LSPP modes and the BP exciton, and we further demonstrate strong coupling in both the nanograting and nanodisk structures through polarization-angle tuning. By establishing a coupled oscillator model, we calculate the dispersion curves of the hybrid exciton states and extract the corresponding Rabi splitting energies. The fact that large Rabi splitting can be realized by combining metallic nanostructures with a BP film, even at room temperature, paves the way toward strong coupling of plasmonic nanostructures with ultrathin BP for various device applications, including light detection, light harvesting, single-photon devices, and optically active devices.
To demonstrate Rabi splitting in different plasmonic nanostructures, we compare the reflection spectra of heterostructures based on two different metal nanostructures. Figure 1(a) shows the three-dimensional schematic of the BP-metal nanograting heterostructure. First, BP is grown on a prepared substrate consisting of a 200 nm thick SiO2 layer topped by a 20 nm thick MgF2 buffer layer. The metal nanostructures on the BP film can then be fabricated by electron beam lithography, and their plasmon resonance can be adjusted by tuning the width (w) of the nanograting arrays. Previous studies have shown that BP degrades rapidly by oxidation under ambient conditions [43]; later work showed that BP can be well preserved by cleaving the sample in dry nitrogen and then transferring it to vacuum [44]. The distance (l) between two adjacent gratings is fixed at 150 nm. This separation makes the interaction between the local plasmons of adjacent gratings negligible, so that it does not disturb the coupling between the LSPP and the BP excitons; as long as the grating width is smaller than the separation distance, inter-grating coupling has a negligible effect on the plasmon-exciton coupling. Accordingly, a series of nanogratings with widths of 80 ∼ 105 nm was formed on the BP film. The inset in Fig. 1(a) shows a schematic of the BP crystal. BP is anisotropic, with atoms arranged along two lattice directions: the armchair direction (x-direction) and the zigzag direction (y-direction). In this manuscript, the BP lattice is oriented along the armchair direction, because the effective mass along the armchair direction is smaller than along the zigzag direction, and a smaller effective mass leads to a higher resonance frequency [30]. Figure 1(b) shows the three-dimensional schematic of the BP-metal nanodisk heterostructure. The distance (d) between two adjacent nanodisks is also fixed at 150 nm, and the radius (r) of the nanodisks ranges from 30 to 45 nm. The refractive index of silver is taken from the results of Johnson and Christy [31]. Although Au-based plasmonic systems exhibit better photochemical stability than Ag-based ones, their Rabi splitting energies are smaller [46]. In the simulations, the finite-difference time-domain (FDTD) method was used to calculate the reflection spectra of the BP-nanostructure heterostructures [47].
Fig. 1. (a) The three-dimensional schematic of BP-nanograting heterostructure. The width of each grating is w, and the distance between two adjacent gratings is l. The inset is a schematic of BP crystal structure. (b) The three-dimensional schematic of BP-nanodisk heterostructure. The radius of each disk is r, and the distance between two adjacent disks is d.
It is known that BP has a thickness-dependent bandgap, ranging from 0.3 eV in the bulk to 2 eV in the monolayer. In this article, the thickness of BP is chosen as 1 nm. As demonstrated by the gray curve in Fig. 2(a), the exciton energy of BP is about 1.72 eV, almost the same as that of the J-aggregate (1.79 eV) [48,49]. As illustrated in Fig. 2(a) and Fig. 2(b), the reflection spectra for different grating widths and disk radii are calculated as a function of plasmon energy; the resonance energy shifts from 1.4 to 1.9 eV as the width or radius varies. The reflection spectra of the bare grating and disk structures completely overlap the reflection spectrum of the BP exciton, which lays the foundation for realizing Rabi splitting. We therefore plot the reflection spectrum of the complete BP-metal nanograting heterostructure in Fig. 2(c). The green line shows that the resonance now has two peaks, one on either side of the exciton energy, which can be identified as two different resonance modes: the high-energy resonance to the right of the exciton peak is called the upper branch (UPB), while the low-energy resonance to the left is called the lower branch (LPB) [50]. However, we cannot yet conclude that the system is strongly coupled, since double reflection peaks can arise for other reasons; for example, corrugations in the metal nanostructures can also lead to double peaks [51]. Moreover, even in the absence of such effects, double reflection peaks alone are not a reliable signature of Rabi splitting: in the intermediate-coupling regime the reflection spectra also show two peaks, even though the system is not strongly coupled [51]. A deeper analysis of these two resonance peaks is therefore required.
Fig. 2. (a) Reflection spectra of the bare grating structure as a function of the grating width. The grating widths range from 80 to 105 nm, and the distance (l) between two adjacent gratings is fixed at 150 nm. (b) Reflection spectra of the bare disk structure as a function of the disk radius. The disk radii range from 30 to 45 nm, and the distance (d) between two adjacent disks is fixed at 150 nm. (c) Reflection spectra of bare BP (gray line), bare Ag nanogratings (red line), and the BP-nanograting heterostructure (green line). The two small red dots mark the two split reflection peaks: LPB and UPB. The thickness of BP is 1 nm, the width (w) of the grating is set to 90 nm, and the distance (l) between two adjacent gratings is fixed at 150 nm.
The reflection spectra of the BP-grating hybrid structure for varying grating widths are shown in Fig. 3(a). The single reflection peak splits into two separate peaks, separated by a dip near the exciton energy (1.72 eV), revealing strong coherent coupling between the LSPP of the grating and the exciton in BP. Both peaks redshift as the grating width increases, with the LPB resonance shifting much more strongly. To judge whether the system is strongly coupled, the strong-coupling condition must first be introduced. Let Epl and EX be the energies of the uncoupled nanostructure resonance and the BP exciton, let hΩR be the splitting energy between the UPB and LPB peaks, and let γpl and γX be the full linewidths at half maximum of the plasmon and BP exciton reflection spectra, respectively. Rabi splitting requires the system to satisfy the condition hΩR > (γpl+γX)/2 [20]; that is, the Rabi splitting must exceed half the sum of the plasmon and exciton linewidths. From the reflection spectra of the bare nanogratings and the BP exciton we obtain γpl≈200 meV and γX≈180 meV, and by fitting the reflection spectra in Fig. 3(a) we extract the splitting energy hΩR = 250 meV; comparing these values shows that the strong-coupling condition is satisfied. Similarly, the reflection spectra in Fig. 3(b), corresponding to the BP-disk hybrid structure, show a single reflection peak splitting into two separate peaks with a splitting energy of 202 meV (hΩR≈202 meV). The linewidth of the disk plasmon is γpl≈100 meV, so the BP-nanodisk heterostructure is also in the strong-coupling regime. We thus find that the plasmons of two different metal structures couple strongly to BP excitons, producing large Rabi splitting energies, which shows that BP has the potential to be combined with a variety of metal structures to achieve large Rabi splitting. Furthermore, to investigate the relationship between the coupling behavior and the polarization angle, we analyzed the spectra of the two nanostructures as the polarization angle is varied in 15° steps. As demonstrated in Fig. 3(c), the spectra of the BP-grating heterostructure are collected from 0° to 90°, corresponding to the longitudinal and transverse polarizations, respectively. The result for the 105 nm grating width shows a progressive vanishing of the two exciton peaks as the polarization is changed from longitudinal (0°) to transverse (90°). As the polarization angle approaches 0°, the UPB peak blueshifts and the LPB peak intensity increases, yielding the largest separation between UPB and LPB and hence the strongest coupling. The BP-nanodisk heterostructure shows a similar trend as the polarization angle is varied from 0° to 90° in Fig. 3(d): as the angle increases, the LPB gradually vanishes and the UPB drifts to higher energy, while at 0° the LPB peak intensity reaches its maximum, corresponding to the strongest coupling. These similarities provide strong evidence that the exciton coupling depends on the polarization angle.
Fig. 3. (a) Reflection spectra of the BP-nanograting heterostructure as a function of the grating width. The grating widths range from 80 to 105 nm, and the distance (l) between two adjacent gratings is fixed at 150 nm. EX0 is the exciton energy. (b) Reflection spectra of the BP-nanodisk heterostructure as a function of the disk radius. The disk radii range from 30 to 45 nm, and the distance (d) between two adjacent disks is fixed at 150 nm. (c) Polarized reflection spectra of the exciton-grating structure at detected polarization angles of 0°, 15°, 30°, 45°, 60°, 75°, and 90°. Polarization dependence of the exciton properties of the BP-nanograting heterostructure with an individual grating width of 105 nm and a 150 nm gap. (d) Polarized reflection spectra of the exciton-disk structure at detected polarization angles of 0°, 15°, 30°, 45°, 60°, 75°, and 90°. Polarization dependence of the exciton properties of the BP-nanodisk heterostructure with an individual disk radius of 40 nm and a 150 nm gap.
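As a quick numerical sanity check of the strong-coupling criterion above, the minimal Python sketch below plugs in the linewidths and splitting energies quoted in the text; the helper function name is ours, introduced only for illustration.

```python
# Minimal check of the criterion hOmega_R > (gamma_pl + gamma_X)/2.
# All values in meV; is_strongly_coupled is an illustrative helper,
# not a function from the paper.

def is_strongly_coupled(rabi_meV, gamma_pl_meV, gamma_x_meV):
    """True if the Rabi energy exceeds half the sum of the two linewidths."""
    return rabi_meV > 0.5 * (gamma_pl_meV + gamma_x_meV)

print(is_strongly_coupled(250, 200, 180))  # BP-nanograting: True (250 > 190)
print(is_strongly_coupled(202, 100, 180))  # BP-nanodisk:    True (202 > 140)
```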
To further illustrate the dependence of the coupling behavior on the polarization angle, Fig. 4 plots the electric field distributions of the BP-nanodisk heterostructure and the bare nanodisk structure in the X-Y plane at a wavelength of 720 nm. Comparing Fig. 4(b) and Fig. 4(d), the electric field intensity of the BP-nanodisk heterostructure is almost 1.5 times that of the bare nanodisk structure. This shows that the coupling between the LSPP generated by the nanodisk and the BP excitons forms a new hybrid plasmon-exciton state, again indicating that the heterostructure is strongly coupled. In addition, comparing the electric field distributions in Fig. 4(c) and Fig. 4(d) shows that the calculated field distributions correlate well with the polarization dependence in Fig. 3(d): in the longitudinal mode the electric field intensity reaches its maximum at a polarization angle of 90° and almost disappears near 0°, while the transverse mode behaves oppositely. We also find that the near-field enhancement in the disk gap may influence the Rabi splitting strength. Notice that the field enhancement in the disk gap in Fig. 4(c) is clearly larger than that in Fig. 4(a), indicating that strong coupling can be realized in the BP-nanodisk heterostructure and that the strong coupling indeed depends on the strength of the near-field enhancement in the gap. Under strong-coupling conditions, the magnitude of the Rabi splitting depends not only on the binding energy and oscillator strength of the excitons but also on the local electric field intensity excited by the metal nanostructures.
Fig. 4. The electric field distribution of the bare disk structure in the X-Y plane at a wavelength of 720 nm when the polarization angle is (a) 0° and (b) 90°. The electric field distribution of the BP-nanodisk heterostructure in the X-Y plane at a wavelength of 720 nm when the polarization angle is (c) 0° and (d) 90°. The radius (r) of the disk is set as 40 nm, and the distance (d) between two adjacent disks is fixed to be 150 nm.
We use the Jaynes-Cummings model to describe the plasmon-exciton coupling [52], and a simple coupled oscillator model (COM) to analyze and describe in depth the observed strong coupling in the BP-metal nanostructure heterostructures [53]. To study the optical properties of the heterostructures, it is necessary to analyze their optical transitions, as shown in Fig. 5(a). Exciton excitation in BP is the photon-driven transition from the electronic ground state $\mathrm{\beta}$ to the exciton excited state $\mathrm{\alpha}$. In the heterostructures, the BP excitons couple to the LSPP excited by the metal nanostructures, and energy is exchanged between excitons and plasmons. If the rate of this energy exchange is much greater than the rate of their losses, the system is in the strong-coupling regime, which manifests itself as Rabi splitting. The interaction between the LSPP excited by the metal structure and the BP excitons generates new hybrid exciton states, which appear as double peaks in the reflection spectra and show anti-crossing behavior. The energies of the UPB and LPB exciton states are calculated using the COM, and the coupled two-mode system is described by the Hamiltonian matrix [54]:
(1)$$\hat{H} \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) = \left[ {\begin{array}{cc} {{E_{pl}} - i\frac{{{\gamma_{pl}}}}{2}}&{{g_x}}\\ {{g_x}}&{{E_x} - i\frac{{{\gamma_x}}}{2}} \end{array}} \right] \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) = E \left( \begin{array}{c} \alpha \\ \beta \end{array} \right)$$
where Epl and EX are the resonance energies of the plasmon and exciton modes, γpl and γX are the corresponding full linewidths at half maximum, and gX is the coupling strength. α and β are the Hopfield coefficients, which give the contributions of the plasmon and exciton modes to each polariton branch, with |α|2 + |β|2 = 1. E denotes the eigenvalues of the hybrid plasmon-exciton states, which are obtained from the equation:
(2)$$\left( {E_{pl}} - i\frac{{\gamma_{pl}}}{2} - E \right)\left( {E_x} - i\frac{{\gamma_x}}{2} - E \right) = g_x^2$$
When the plasmon and exciton linewidths are small compared with their energies, the eigenvalues E can be approximated as:
(3)$$E({\hbar {\omega_p}} )= \frac{{\hbar {\omega _p} + \hbar {\omega _0}}}{2} \pm \frac{1}{2}\sqrt {{{({\hbar {\Omega _R}} )}^2} + {{({\hbar {\omega_p} - \hbar {\omega_0}} )}^2}}$$
where $\hbar {\mathrm{\omega }_0}$ and $\hbar {\mathrm{\omega }_\textrm{p}}$ are the uncoupled exciton and plasmon (LSPP) energies, respectively, and $\hbar {\mathrm{\Omega }_\textrm{R}} = 2{g_x}\; $ is the coupling energy, which can be estimated from the spectral distance between the two peaks. As illustrated in Fig. 5(b) and Fig. 5(c), the dispersion curves of the two hybrid BP-nanostructure plasmon-exciton states are plotted, and the energies of the UPB and LPB exhibit a clear anti-crossing behavior. The Rabi splittings are found to be approximately 250 meV and 202 meV, respectively, lower than the highest values previously reported [20]. Although the exciton energies of monolayer BP and monolayer MoS2 are almost the same, the radiative decay of BP is stronger, i.e., its energy dissipation rate is larger, which leads to greater energy loss during the light-matter interaction [28]. Therefore, the rate of coherent energy exchange between light and matter is relatively lower in BP than in MoS2, and the Rabi splitting energy of the BP-based system is smaller than that of the MoS2-based system. Figure 5(d) shows the reflection cross-section of the BP-nanodisk nanostructures as a function of wavelength and nanodisk radius. The white dotted line marks the energy of the uncoupled BP exciton, and the blue dashed lines in the reflection spectra mark the two polariton branches, UPB and LPB, which clearly exhibit anti-crossing behavior. These results confirm that the resonant coupling of the excitons and the LSPP is indeed strong coupling, so that Rabi splitting can be observed in the heterostructure. In addition, to understand the contributions of the plasmon and the exciton to the two polariton branches, the fractions of the plasmon and exciton components of the UPB and LPB in the BP-nanodisk heterostructure, as functions of the plasmon energy, are obtained from the COM in Fig. 5(e) and Fig. 5(f). When the plasmon energy is larger than the exciton energy (1.72 eV), the exciton mode dominates the LPB, which then behaves more like an exciton; conversely, when the plasmon energy is smaller than the exciton energy, the plasmon mode dominates the LPB, which behaves more like a plasmon. Notably, when the plasmon and exciton energies are equal, the exciton and plasmon modes contribute equally to the hybrid states; the coupling between the exciton and the plasmon is then strongest, and the branch separation at this zero detuning gives the maximum Rabi splitting energy.
Fig. 5. (a) Schematic of the optical transitions in metal/BP hybrid nanostructure. (b) Dispersion curves of the hybrid BP-nanograting exciton states. Energies of reflection peaks as a function of the nanograting width extracted from the reflection spectra. The pink and blue dashed lines represent the uncoupled exciton and plasmon energies, respectively. The double-headed orange arrow stands for the Rabi splitting energy. (c) Dispersion curves of the hybrid BP-nanodisk exciton states. Energies of reflectivity peaks as a function of the nanodisk radius extracted from the reflection spectra. The blue and pink dashed lines represent the uncoupled exciton and plasmon energies, respectively. (d) The reflection peaks (blue dotted curve) as a function of the radius (r) of the disk. (e), and (f) Hopfield coefficients for the polariton branches (UPB in (e) and LPB in (f)) of BP-nanodisk heterostructure, calculated using the COM, from Eq. (1). These provide the weighting of each constituent. Blue stars correspond to the coefficients of the exciton mode and the red spheres to those of the plasmons mode.
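To make the coupled oscillator model of Eqs. (1)-(3) concrete, the following minimal Python sketch diagonalizes the non-Hermitian 2×2 Hamiltonian over a sweep of plasmon energies and recovers both the anti-crossing branches and the plasmon Hopfield weight. The exciton energy and linewidths follow the text, while the value g_x = 0.125 eV (so that 2g_x ≈ 250 meV) is our illustrative assumption for the nanograting case.

```python
import numpy as np

# Coupled-oscillator sketch for the BP-nanograting case. E_x, gamma_x, gamma_p
# follow the text; g_x = 0.125 eV (2*g_x = 250 meV) is an assumed value, and
# the plasmon sweep (1.4-1.9 eV) matches the tuning range of Fig. 2.
E_x, gamma_x, gamma_p, g_x = 1.72, 0.18, 0.20, 0.125  # all in eV

E_p = np.linspace(1.4, 1.9, 201)          # detuned plasmon energies (eV)
lpb, upb, plasmon_weight_upb = [], [], []

for ep in E_p:
    # Non-Hermitian Hamiltonian of Eq. (1)
    H = np.array([[ep - 0.5j * gamma_p, g_x],
                  [g_x, E_x - 0.5j * gamma_x]])
    vals, vecs = np.linalg.eig(H)
    order = np.argsort(vals.real)         # lower branch first
    lpb.append(vals[order[0]].real)
    upb.append(vals[order[1]].real)
    # |alpha|^2: plasmon fraction of the upper branch (a Hopfield coefficient)
    v = vecs[:, order[1]]
    plasmon_weight_upb.append(abs(v[0])**2 / (abs(v[0])**2 + abs(v[1])**2))

i0 = np.argmin(np.abs(E_p - E_x))         # zero-detuning index
print(f"branch separation at zero detuning: {1e3 * (upb[i0] - lpb[i0]):.0f} meV")
```

At zero detuning the script prints a separation of about 250 meV, consistent with the nanograting Rabi splitting quoted above, and the computed plasmon weight crosses 1/2 at the exciton energy, as in Figs. 5(e)-(f).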
In summary, we report strong coupling between BP excitons and the plasmons excited by two different metal nanostructures at room temperature. Rabi splitting is observed in both hybrid systems, with splitting energies of 250 meV and 202 meV, respectively. The strength of the exciton-plasmon coupling depends closely on the dimensions of the nanostructure and the polarization angle of the incident light; by tuning both, the system can be driven into the strong-coupling regime. In addition, the COM is used to analyze the physical mechanism of the exciton-plasmon coupling in detail. Overall, BP-metal nanostructure heterostructures make it possible to observe large Rabi splitting at room temperature, pointing to a new route for nanophotonics research based on such heterostructures, with applications including light detection, light harvesting, single-photon devices, and optically active devices.
National Key Research and Development Program of China (2018YFB2200500, 2018YFB2202800); National Natural Science Foundation of China (61534004, 61851406, 61874081, 91964202).
1. N. T. Fofang, T.-H. Park, O. Neumann, and N. J. Halas, "Exciton Nanoparticles: PlasmonExciton Coupling in Nanoshell-J-Aggregate Complexes," Nano Lett. 8(10), 3481–3487 (2008). [CrossRef]
2. W. Ni, T. Ambjörnsson, S. P. Apell, H. Chen, and J. Wang, "Observing Plasmonic-Molecular Resonance Coupling on Single Gold Nanorods," Nano Lett. 10(1), 77–84 (2010). [CrossRef]
3. G. P. Wiederrecht, G. A. Wurtz, and J. Hranisavljevic, "Coherent Coupling of Molecular Excitons to Electronic Polarizations of Noble Metal Nanoparticles," Nano Lett. 4(11), 2121–2125 (2004). [CrossRef]
4. H. Chen, T. Ming, L. Zhao, F. Wang, and C.-H. Yan, "Plasmon−molecule interactions," Nano Today 5(5), 494–505 (2010). [CrossRef]
5. P. Törmä and W. L. Barnes, "Strong coupling between surface plasmon polaritons and emitters: a review," Rep. Prog. Phys. 78(1), 013901 (2015). [CrossRef]
6. D. Press, S. Gotzinger, S. Reitzenstein, C. Hofmann, A. Loffler, M. Kamp, A. Forchel, and Y. Yamamoto, "Photon Antibunching from a Single Quantum-Dot-Microcavity System in the Strong Coupling Regime," Phys. Rev.Lett. 98(11), 117402 (2007). [CrossRef]
7. D. E. Chang, A. S. Sorensen, P. R. Hemmer, and M. D. Lukin, "Strong coupling of single emitters to surface plasmons," Phys. Rev. B 76(3), 035420 (2007). [CrossRef]
8. F. R. Ong, P. Bertet, and D. Vion, "Strong coupling of a spin ensemble to a superconducting resonator," Phys. Rev. Lett. 105(14), 140502 (2010). [CrossRef]
9. S. Wang, Q. Levan, and T. Peyronel, "Plasmonic Nanoantenna Arrays as Efficient Etendue Reducers for Optical Detection," ACS Photonics 5(6), 2478–2485 (2018). [CrossRef]
10. E. Peter, P. Senellart, D. Martrou, A. Lemaître, and J. Hours, "Exciton-photon strong-coupling regime for a single quantum dot embedded in a microcavity," Phys. Rev. Lett. 95(6), 067401 (2005). [CrossRef]
11. J. J. Baumberg, A. V. Kavokin, S. Christopoulos, and R. Butté, "Spontaneous polarization buildup in a room-temperature polariton laser," Phys. Rev. Lett. 101(13), 136409 (2008). [CrossRef]
12. F. Li, L. Orosz, O. Kamoun, and S. Bouchoule, "From excitonic to photonic polariton condensate in a ZnO-based microcavity," Phys. Rev. Lett. 110(19), 196406 (2013). [CrossRef]
13. F. Xia, H. Wang, D. Xiao, M. Dubey, and A. Ramasubramaniam, "Two-dimensional material nanophotonics," Nat. Photonics 8(12), 899–907 (2014). [CrossRef]
14. F. H. Koppens, T. Mueller, P. Avouris, A. Ferrari, M. Vitiello, and M. Polini, "Photodetectors based on graphene, other two-dimensional materials and hybrid systems," Nat. Nanotechnol. 9(10), 780–793 (2014). [CrossRef]
15. S. Z. Butler, S. M. Hollen, L. Cao, Y. Cui, J. A. Gupta, H. R. Gutiérrez, T. F. Heinz, S. S. Hong, J. Huang, A. F. Ismach, E. Johnston-Halperin, M. Kuno, V. V. Plashnitsa, R. D. Robinson, R. S. Ruoff, S. Salahuddin, J. Shan, L. Shi, M. G. Spencer, M. Terrones, W. Windl, and J. E. Goldberger, "Progress, Challenges, and Opportunities in Two-Dimensional Materials Beyond Graphene," ACS Nano 7(4), 2898–2926 (2013). [CrossRef]
16. Q. H. Wang, K. Kalantar-Zadeh, A. Kis, J. N. Coleman, and M. S. Strano, "Electronics and optoelectronics of two-dimensional transition metal dichalcogenides," Nat. Nanotechnol. 7(11), 699–712 (2012). [CrossRef]
17. X. Liu, T. Galfsky, Z. Sun, F. Xia, E. Lin, Y. Lee, S. Kéna-Cohen, and V. M. Menon, "Strong light–matter coupling in two-dimensional atomic crystals," Nat. Photonics 9(1), 30–34 (2015). [CrossRef]
18. S. Imannezhad and S. Shojaei, "Robust exciton–polariton Rabi splitting in graphene nano ribbons: the means of two-coupled semiconductor microcavities," Opt. Eng. 57(04), 1 (2018). [CrossRef]
19. S. Wang, X. Ouyang, Z. Feng, Y. Cao, M. Gu, and X. Li, "Diffractive photonic applications mediated by laser reduced graphene oxides," Opto-Electron. Adv. 1(2), 17000201–17000208 (2018). [CrossRef]
20. B. Li, S. Zu, Z. Zhang, L. Zheng, Q. Jiang, B. Du, and Z. Fang, "Large Rabi splitting obtained in Ag-WS2 strong-coupling heterostructure with optical microcavity at room temperature," Opto-Electron. Adv. 2(5), 19000801–19000809 (2019). [CrossRef]
21. M. Buscema, D. J. Groenendijk, G. A. Steele, H. S. van der Zant, and A. Castellanos Gomez, "Photovoltaic effect in few-layer black phosphorus PN junctions defined by local electrostatic gating," Nat. Commun. 5(1), 4651 (2014). [CrossRef]
22. L. Han, L. Wang, H. Xing, and X. Chen, "Active Tuning of Midinfrared Surface Plasmon Resonance and Its Hybridization in Black Phosphorus Sheet Array," ACS Photonics 5(9), 3828–3837 (2018). [CrossRef]
23. M. Buscema, D. J. Groenendijk, S. I. Blanter, G. A. Steele, H. S. J. van der Zant, and A. Castellanos-Gomez, "Fast and broadband photoresponse of few-layer black phosphorus field-effect transistors," Nano Lett. 14(6), 3347–3352 (2014). [CrossRef]
24. Y. Deng, Z. Luo, N. J. Conrad, H. Liu, Y. Gong, S. Najmaei, P. M. Ajayan, J. Lou, X. Xu, and P. D. Ye, "Black phosphorus-monolayer MoS2 van der Waals heterojunction p-n diode," ACS Nano 8(8), 8292–8299 (2014). [CrossRef]
25. M. Engel, M. Steiner, and P. Avouris, "Black phosphorus photodetector for multispectral, high-resolution imaging," Nano Lett. 14(11), 6414–6417 (2014). [CrossRef]
26. Y. Du, C. Ouyang, S. Shi, and M. Lei, "Ab initio studies on atomic and electronic structures of black phosphorus," J. Appl. Phys. 17, 093718 (2009). [CrossRef]
27. Ø Prytz and E. Flage-Larsen, "The influence of exact exchange corrections in vander Waals layered narrow bandgap black phosphorus," J. Phys.: Condens. Matter 22(1), 015502 (2010). [CrossRef]
28. S. Narita, Y. Akaham, Y. Tsukiyama, K. Muro, S. Mori, S. Endo, M. Taniguchi, M. Seki, S. Suga, A. Mikuni, and H. Kanzaki, "Electrical and optical properties of black phosphorus single crystals," Physica 117-118, 422–424 (1983). [CrossRef]
29. Y. Maruyama, S. Suzuki, K. Kobayashi, and S. Tanuma, "Synthesis and some properties of black phosphorus single crystals," Physica 105(1-3), 99–102 (1981). [CrossRef]
30. Y. Huang and Y. Liu, "Active Tuning of Hybridization Effects of Mid-infrared Surface Plasmon Resonance in Black Phosphorus Sheet Array and Metal Grating Slit," Opt. Mater. Express 10(1), 14–28 (2020). [CrossRef]
31. P. B. Johnson and R. W. Christy, "Optical Constants of the Noble Metals," Phys. Rev. B: Solid State 6(12), 4370–4379 (1972). [CrossRef]
32. O. Pérez-González, N. Zabala, A. G. Borisov, N. J. Halas, P. Nordlander, and J. Aizpurua, "Optical Spectroscopy of Conductive Junctions in Plasmonic Cavities," Nano Lett. 10(8), 3090–3095 (2010). [CrossRef]
33. P. Vasa, R. Pomraenke, G. Cirmi, E. De Re, W. Wang, S. Schwieger, D. Leipold, E. Runge, G. Cerullo, and C. Lienau, "Ultrafast manipulation of strong coupling in metal-molecular aggregate hybrid nanostructures," ACS Nano 4(12), 7559–7565 (2010). [CrossRef]
34. N. T. Fofang, N. K. Grady, Z. Fan, A. O. Govorov, and N. J. Halas, "Exciton Dynamics: Exciton−Plasmon Coupling in a J-Aggregate−Au Nanoshell Complex Provides a Mechanism for Nonlinearity," Nano Lett. 11(4), 1556–1560 (2011). [CrossRef]
35. Y. B. Zheng, B. K. Juluri, L. Lin Jensen, D. Ahmed, M. Lu, L. Jensen, and T. J. Huang, "Dynamic Tuning of Plasmon–Exciton Coupling in Arrays of Nanodisk–J-aggregate Complexes," Adv. Mater. 22(32), 3603–3607 (2010). [CrossRef]
36. J. Dintinger, S. Klein, F. Bustos, W. L. Barnes, and T. W. Ebbesen, "Strong coupling between surface plasmon-polaritons and organic molecules in subwavelength hole arrays," Phys. Rev. B 71(3), 035424 (2005). [CrossRef]
37. B. J. Lawrie, K. W. Kim, D. P. Norton, and R. F. Haglund, "Plasmon–Exciton Hybridization in ZnO Quantum-Well Al Nanodisc Heterostructures," Nano Lett. 12(12), 6152–6157 (2012). [CrossRef]
38. M. J. Achermann, "Exciton–Plasmon Interactions in Metal–Semiconductor Nanostructures," J. Phys. Chem. Lett. 1(19), 2837–2843 (2010). [CrossRef]
39. P. Vasa, R. Pomraenke, S. Schwieger, Y. I. Mazur, V. Kunets, P. Srinivasan, E. Johnson, J. E. Kihm, D. S. Kim, E. Runge, G. Salamo, and C. Lienau, "Coherent exciton - Surface plasmon polariton interactions in hybrid metal semiconductor nanostructures," Phys. Rev. Lett. 101(11), 116801 (2008). [CrossRef]
40. A. O. Govorov, G. W. Bryant, W. Zhang, T. Skeini, J. Lee, N. A. Kotov, J. M. Slocik, and R. R. Naik, "Exciton Plasmon Interaction and Hybrid Excitons in Semiconductor Metal Nanoparticle Assemblies," Nano Lett. 6(5), 984–994 (2006). [CrossRef]
41. D. E. Gomez, K. C. Vernon, P. Mulvaney, and T. J. Davis, "Surface Plasmon Mediated Strong Exciton Photon Coupling in Semiconductor Nanocrystals," Nano Lett. 10(1), 274–278 (2010). [CrossRef]
42. D. Melnikau, R. Esteban, D. Savateeva, A. Sanchez-Iglesias, M. Grzelczak, M. K. Schmidt, L. M. Liz-Marzan, J. Aizpurua, and Y. P. Rakovich, "Rabi Splitting in Photoluminescence Spectra of Hybrid Systems of Gold Nanorods and J-Aggregates," J. Phys. Chem. Lett. 7(2), 354–362 (2016). [CrossRef]
43. S.-L. Yau, T. P. Moffat, and M. M. Lerner, "STM of the (010) surface of orthorhombic phosphorus," Chem. Phys. Lett. 198(3-4), 383–388 (1992). [CrossRef]
44. A. Favron, E. Gaufrès, and R. Martel, "Photooxidation and quantum confinement effects in exfoliated black phosphorus," Nat. Mater. 14(8), 826–832 (2015). [CrossRef]
45. D. Felbacq, E. Cambril, and P. Valvin, "Giant Rabi splitting between localized mixed plasmon-exciton states in a two-dimensional array of nanosize metallic disks in an organic semiconductor," Phys. Rev. B 80(3), 033303 (2009). [CrossRef]
46. P. Vasa, W. Wang, R. Pomraenke, M. Lammers, M. Maiuri, C. Manzoni, G. Cerullo, and C. Lienau, "Real-time observation of ultrafast Rabi oscillations between excitons and plasmons in metal nanostructures with J-aggregates," Nat. Photonics 7(2), 128–132 (2013). [CrossRef]
47. Z. Liu and K. Aydin, "Localized surface plasmons in nanostructured monolayer black phosphorus," Nano Lett. 16(6), 3457–3462 (2016). [CrossRef]
48. A. Sobhani, A. Lauchner, S. Najmaei, C. Ayala-Orozco, F. Wen, J. Lou, and N. J. Halas, "Enhancing the photocurrent and photoluminescence of single crystal monolayer MoS2 with resonant plasmonic nanoshells," Appl. Phys. Lett. 104(3), 031112 (2014). [CrossRef]
49. A. E. Schlather, N. Large, A. S. Urban, P. Nordlander, and N. J. Halas, "Near-Field Mediated Exciton Coupling and Giant Rabi Splitting in Individual Metallic Dimers," Nano Lett. 13(7), 3281–3286 (2013). [CrossRef]
50. Z. Chai, X. Hu, and Q. Gong, "Exciton polaritons based on planar dielectric Si asymmetric nanogratings coupled with J-aggregated dyes film," Front. Optoelectron. 13(1), 4–11 (2020). [CrossRef]
51. H. Leng, B. Szychowski, M. Daniel, and M. Pelton, "Strong coupling and induced transparency at room temperature with single quantum dots and gap plasmons," Nat. Commun. 9(1), 4012 (2018). [CrossRef]
52. B. W. Shore and P. L. Knight, "The Jaynes-Cummings Model," J. Mod. Opt. 40(7), 1195–1238 (1993). [CrossRef]
53. D. Zheng, S. Zhang, Q. Deng, M. Kang, P. Nordlander, and H. Xu, "Manipulating Coherent Plasmon−Exciton Interaction in a Single Silver Nanorod on Monolayer WSe2," Nano Lett. 17(6), 3809–3814 (2017). [CrossRef]
54. S. Rudin and T. L. Reinecke, "Oscillator model for vacuum Rabi splitting in microcavities," Phys. Rev. B 59(15), 10227–10233 (1999). [CrossRef]
(1) $\hat{H}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=\hbar\begin{pmatrix}E_{\mathrm{pl}}-i\gamma_{\mathrm{pl}}/2 & g_x\\ g_x & E_x-i\gamma_x/2\end{pmatrix}\begin{pmatrix}\alpha\\ \beta\end{pmatrix}=E\begin{pmatrix}\alpha\\ \beta\end{pmatrix}$
(2) $\left(E_{\mathrm{pl}}-i\gamma_{\mathrm{pl}}/2-E\right)\left(E_x-i\gamma_x/2-E\right)=g_x^2$
(3) $E(\hbar\omega_p)=\dfrac{\hbar\omega_p+\hbar\omega_0}{2}\pm\dfrac{1}{2}\sqrt{(\hbar\Omega_R)^2+(\hbar\omega_p-\hbar\omega_0)^2}$ | CommonCrawl
TWIRL
In cryptography and number theory, TWIRL (The Weizmann Institute Relation Locator) is a hypothetical hardware device designed to speed up the sieving step of the general number field sieve integer factorization algorithm.[1] During the sieving step, the algorithm searches for numbers with a certain mathematical relationship. In distributed factoring projects, this is the step that is parallelized to a large number of processors.
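As a toy illustration of the "certain mathematical relationship" sought during sieving (this is an illustrative sketch, not TWIRL's actual circuitry; the names are ours): sieving in the number field sieve family hunts for smooth numbers, i.e. values whose prime factors all lie below some bound B.

def is_smooth(n, bound):
    # Return True if every prime factor of n is <= bound.
    for p in range(2, bound + 1):
        while n % p == 0:
            n //= p
    return n == 1

# e.g. the 7-smooth numbers below 50:
print([n for n in range(2, 50) if is_smooth(n, 7)])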
TWIRL is still a hypothetical device — no implementation has been publicly reported. However, its designers, Adi Shamir and Eran Tromer, estimate that if TWIRL were built, it would be able to factor 1024-bit numbers in one year at the cost of "a few dozen million US dollars". TWIRL could therefore have enormous repercussions in cryptography and computer security — many high-security systems still use 1024-bit RSA keys, which TWIRL would be able to break in a reasonable amount of time and for reasonable costs.
The security of some important cryptographic algorithms, notably RSA and the Blum Blum Shub pseudorandom number generator, rests on the difficulty of factoring large integers. If factoring large integers becomes easier, users of these algorithms will have to resort to larger keys (which are computationally more expensive) or to different algorithms, whose security rests on some other computationally hard problem (such as the discrete logarithm problem).
See also
• Custom hardware attack
• TWINKLE
References
1. Shamir, Adi; Tromer, Eran (2003), "Factoring Large Numbers with the TWIRL Device", Advances in Cryptology – CRYPTO 2003, Springer Berlin Heidelberg, pp. 1–26, doi:10.1007/978-3-540-45146-4_1, ISBN 9783540406747
External links
• "The TWIRL integer factorization device" - homepage
| Wikipedia |
Kinetic & Related Models
June 2016 , Volume 9 , Issue 2
Approximating the $M_2$ method by the extended quadrature method of moments for radiative transfer in slab geometry
Graham W. Alldredge, Ruo Li and Weiming Li
2016, 9(2): 237-249 doi: 10.3934/krm.2016.9.237
We consider the simplest member of the hierarchy of the extended quadrature method of moments (EQMOM), which gives equations for the zeroth-, first-, and second-order moments of the energy density of photons in the radiative transfer equations in slab geometry. First we show that the equations are well-defined for all moment vectors consistent with a nonnegative underlying distribution, and that the reconstruction is explicit and therefore computationally inexpensive. Second, we show that the resulting moment equations are hyperbolic. These two properties make this moment method quite similar to the attractive but far more expensive $M_2$ method. We confirm through numerical solutions to several benchmark problems that the methods give qualitatively similar results.
Time asymptotics for a critical case in fragmentation and growth-fragmentation equations
Marie Doumic and Miguel Escobedo
2016, 9(2): 251-297 doi: 10.3934/krm.2016.9.251
Fragmentation and growth-fragmentation equations form a family of problems with varied and wide applications. This paper is devoted to the description of the long-time asymptotics of two critical cases of these equations: when the division rate is constant and the growth rate is linear or zero. The study of these cases may be reduced to the study of the following fragmentation equation: $$\frac{\partial }{\partial t} u(t,x) + u(t,x) = \int\limits_x^\infty k_0 \left(\frac{x}{y}\right) u(t,y)\, dy.$$ Using the Mellin transform of the equation, we determine the long-time behavior of the solutions. Our results show in particular the strong dependence of this asymptotic behavior on the initial data.
Sharp regularity properties for the non-cutoff spatially homogeneous Boltzmann equation
Léo Glangetas, Hao-Guang Li and Chao-Jiang Xu
In this work, we study the Cauchy problem for the spatially homogeneous non-cutoff Boltzmann equation with Maxwellian molecules. We prove that this Cauchy problem enjoys Gelfand-Shilov's regularizing effect, meaning that the smoothing properties are the same as for the Cauchy problem defined by the evolution equation associated to a fractional harmonic oscillator. The power of the fractional exponent is exactly the same as the singular index of the non-cutoff collision kernel of the Boltzmann equation. Therefore, we get the sharp regularity of solutions in the Gevrey class and also the sharp decay of solutions with an exponential weight. We also give a method to construct the solution of the Boltzmann equation by solving an infinite system of ordinary differential equations. The key tool is the spectral decomposition of the linear and non-linear Boltzmann operators.
An accurate and efficient discrete formulation of aggregation population balance equation
Jitendra Kumar, Gurmeet Kaur and Evangelos Tsotsas
An efficient and accurate discretization method based on a finite volume approach is presented for solving the aggregation population balance equation. The principle of the method lies in the introduction of an extra feature beyond the essential requirement of mass conservation. The extra feature controls more precisely the behaviour of a chosen integral property of the particle size distribution that does not remain constant like mass, but changes with time. The new method is compared to the finite volume scheme recently proposed by Forestier and Mancini (SIAM J. Sci. Comput., 34, B840-B860). It retains all the advantages of that scheme, such as simplicity, generality to apply on uniform or nonuniform meshes, and computational efficiency, and it improves the prediction of the complete particle size distribution as well as of its moments. The numerical results for the particle size distribution using the previous finite volume method consistently overpredict, which is reflected in the diverging behaviour of the second and higher moments for large extents of aggregation. The new method, however, controls the growth of the higher moments very well and predicts the zeroth moment with high accuracy. Consequently, the new method becomes a powerful tool for the computation of gelling problems. The scheme is validated and compared with the existing finite volume method on several aggregation problems with suitably selected aggregation kernels, including analytically tractable and physically relevant kernels.
Global solutions to the relativistic Vlasov-Poisson-Fokker-Planck system
Lan Luo and Hongjun Yu
Global solutions to the relativistic Vlasov-Poisson-Fokker-Planck system near the relativistic Maxwellian are constructed by an approach combining the compensating function and the energy method. In addition, an exponential decay rate in time of the solution to its equilibrium is obtained.
Parameter extraction of complex production systems via a kinetic approach
Ali K. Unver, Christian Ringhofer and M. Emir Koksal
2016, 9(2): 407-427 doi: 10.3934/krm.2016.9.407
Continuum models of re-entrant production systems are developed that treat the flow of products in analogy to traffic flow. Specifically, the dynamics of material flow through a re-entrant factory are modeled via a parabolic conservation law describing the product density and flux in the factory. The basic idea underlying the approach is to obtain transport coefficients for fluid dynamic models in a multi-scale setting simultaneously from Monte Carlo simulations and actual observations of the physical system, i.e. the factory. Since partial differential equation (PDE) conservation laws have been used successfully to model the dynamical behavior of product flow in manufacturing systems, a re-entrant manufacturing system is modeled using a diffusive PDE. The specifics of the production process enter into the velocity and diffusion coefficients of the conservation law. The resulting nonlinear parabolic conservation law model allows fast and accurate simulations. With the traffic flow-like PDE model, the transient behavior of the discrete event simulation (DES) model is predicted according to the averaged influx obtained from discrete event experiments. The work provides an almost universally applicable tool for rough estimates of the behavior of complex production systems in non-equilibrium regimes.
| CommonCrawl
\begin{document}
\title{Finding and assessing treatment effect sweet spots in clinical trial data}
\begin{abstract} Identifying heterogeneous treatment effects (HTEs) in randomized controlled trials is an important step toward understanding and acting on trial results. However, HTEs are often small and difficult to identify, and HTE modeling methods which are very general can suffer from low power. We present a method that exploits any existing relationship between illness severity and treatment effect, and identifies the ``sweet spot", the contiguous range of illness severity where the estimated treatment benefit is maximized. We further compute a bias-corrected estimate of the conditional average treatment effect (CATE) in the sweet spot, and a $p$-value. Because we identify a single sweet spot and $p$-value, we believe our method to be straightforward to interpret and actionable: results from our method can inform future clinical trials and help clinicians make personalized treatment recommendations. \end{abstract}
\section{Introduction}
Randomized trials often need large sample sizes to achieve adequate statistical power, and recruiting patients across the spectrum of illness severity can make it easier to find a sufficiently large cohort. However, this recruitment strategy can be a double-edged sword, as patients with mild or severe illness may receive little benefit from treatment~\cite{redelmeier2020approach}. For patients with mild illness, treatment may be superfluous; for patients with severe illness, treatment may be futile. Including these patients in a study may bias a study toward the null, as the study results rely on a subset of patients with illness severity in the middle range.
We present a simple approach that identifies the single contiguous range of illness severity where the estimated treatment benefit is maximized. We consider this range to be the ``sweet spot" for treatment. We further present methods to compute a bias-corrected estimate of the conditional average treatment effect (CATE), and to control type I error. Because we identify a single sweet spot and compute a $p$-value, we believe our method to be straightforward to interpret and actionable: results from our method can inform future clinical trials and help clinicians make personalized treatment recommendations.
As a running example to illustrate our method, we use the AQUAMAT trial~\cite{dondorp2010artesunate}, which compares artesunate to quinine in the treatment of severe falciparum malaria in African children. This randomized trial studied $\num{5488}$ children with severe malaria, with primary outcome in-hospital mortality. This study was conducted between Oct 3, 2005, and July 14, 2010. Half of the patients were randomized to receive artesunate, and half to receive quinine. The trial found that artesunate substantially reduces mortality in African children with severe malaria. The patients in this study were diverse across $45$ measured covariates including age, sex, complications on admission, vitals, and labs (for a full description of covariates, we refer to \cite{dondorp2010artesunate}). This diversity makes it more likely that some patients would do well or poorly regardless of care, though it is not obvious how to identify them: a Mantel-Haenszel subgroup analysis showed no evidence of any differences in outcomes between subgroups. However, the reanalysis of this trial in \cite{watson2020graphing} suggests that there may be treatment effect heterogeneity along the axis of illness severity.
In Figure \ref{fig:intro}, we visualize the smoothed treatment effect estimate as a function of illness severity for the patients in this trial. From this image, it is tempting to determine that there is a range of illness severity where patients receive more benefit from treatment; acting on this determination can be dangerous, however, as the apparent sweet spot could be due to chance alone. Our statistical framework protects against this by finding sweet spots and judging their significance.
\begin{figure}
\caption{Smoothed treatment effect as a function of illness severity on the AQUAMAT randomized trial. By visual inspection, it seems that patients with illness severity in the shaded range seem to benefit more from treatment.}
\label{fig:intro}
\end{figure}
\section{Related work}
There has been recent interest in developing methods to estimate heterogeneous treatment effects in randomized trials \cite{redelmeier2020approach, zhao2013effectively, athey2016recursive, athey2019generalized, kunzel2019metalearners, watson2020machine, watson2020graphing}.
Athey and Imbens~\cite{athey2016recursive} model treatment effect as a function of patient covariates: the causal tree estimator is a tree that partitions the covariate space into subsets where patients have lower-than or higher-than-average treatment effects, and allows valid inference for causal effects in randomized trials. Causal trees can be combined in an ensemble to form a causal forest~\cite{athey2019generalized} which is more flexible, though harder to interpret. Instead of modeling treatment effect, Watson and Holmes~\cite{watson2020machine} identify the presence of treatment effect heterogeneity with control of type I error. This method uses a statistical test to compare treated and untreated outcomes among a subgroup of patients predicted to benefit from treatment. This is repeated many times on different subsets of the data, and the tests are summarized into a single $p$-value that accounts for multiple hypothesis testing. As in \cite{athey2016recursive}, this method may uncover treatment effect heterogeneity even when the relationship between covariates and the outcome is complex.
However, heterogeneous treatment effects are often small and difficult to identify, and methods which are very general can suffer from low power. Rather than search the full covariate space, the methods in Redelmeier and Tibshirani~\cite{redelmeier2020approach}, Watson and Holmes~\cite{watson2020graphing} and Kent et al~\cite{kent2010assessing} look directly at the relationship between a precomputed measure of illness severity and treatment effect. The method in \cite{redelmeier2020approach} orders patients by increasing illness severity, computes the cumulative treatment effect, and compares the goodness of fit of a line and a Gompertz CDF -- there is no heterogeneity when the cumulative treatment effect is linear. The method in \cite{watson2020graphing} models individual treatment effect as a function of illness severity using a localised reweighting kernel. Finally, the method in~\cite{kent2010assessing} stratifies patients by risk, and then estimates the treatment effect separately on each stratum. None of these methods quantify type I error or statistical power.
\section{Our proposed method} Our method exploits any existing relationship between between illness severity and treatment effect, and searches for the contiguous range of illness severity where the estimated treatment benefit is maximized. We further compute a bias-corrected estimate of the conditional average treatment effect (CATE) in the sweet spot, and compute a $p$-value.
\begin{algorithm}
\caption{Finding and assessing a sweet spot on clinical trial data\\for a randomized trial design with a control$:$treated ratio of $k$:$1$}
\begin{enumerate}
\item Compute a predilection score for each patient that indicates illness severity.
\item Create sets of patients with similar scores, consisting of $k$ controls and one treated patient. On each matched set, compute the treatment effect, the difference between the treated outcome and average control outcome.
\item Perform an exhaustive search to identify the sweet spot --- the range of illness severity scores where the treatment effect is maximal.
\item Test the null hypothesis that there is no sweet spot related to illness severity with a permutation test.
\item Debias the estimate of the treatment effect inside and outside the sweet spot using the parametric bootstrap.
\end{enumerate}
\label{alg:overview} \end{algorithm}
\subsection{Computing predilection scores} Illness severity is characterized by prognosis at baseline: we refer to this as a \textit{predilection score}, as it represents the patient's baseline predilection to the outcome. Predilection scores are computed from a model trained to predict the outcome from the baseline covariates, and they may take on any real values. To model continuous or binary outcomes, we recommend linear or logistic regression.
There are two important considerations when fitting the predilection score model. First, the model must be trained only on the control patients, as the intervention may have altered the natural history of the treated patients. Second, prevalidation must be used to avoid overfitting to the controls~\cite{tibshirani2002pre, abadie2018endogenous}; prevalidation ensures that every patient's prediction comes from a model trained solely on other patients. To do $k$-fold prevalidation, partition the controls evenly into $k$ sets, train a predilection score model on $k-1$ sets and use this model to compute scores on the remaining set. This is repeated so that every set is held out, and as a result, no patient is used to train the model that ultimately computes their predilection score. We illustrate this on a small example in Table~\ref{table:preval}. We thank Lu Tian, Trevor Hastie and Rocky Aikens for bringing this to our attention, and we further motivate the need for prevalidation in Section~\ref{section:preval}.
We note that it may not be necessary to train a new predilection score model when there already exists a model trained on an external dataset of patients who received the ``control'' treatment.
\begin{table}[H] \begin{center}
\begin{tabular}{ l | l } train & predict\\ \hline
controls 3--10 & controls 1, 2 \\
controls 1, 2, 5--10 & controls 3, 4 \\
controls 1--4, 7--10 & controls 5, 6 \\
controls 1--6, 9, 10 & controls 7, 8 \\
controls 1--8 & controls 9, 10 \\ \end{tabular} \caption{An example of five-fold prevalidation on $10$ controls. More generally, when doing $k$-fold prevalidation, we use $k$ models, each trained on $\frac{(k-1)n}{k}$ controls and used to compute the score on the remaining $\frac{n}{k}$ controls.} \label{table:preval} \end{center} \end{table}
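As an illustrative sketch of this step (the code is ours, not the authors'; it assumes NumPy and scikit-learn, and all names are our own), $k$-fold prevalidated predilection scores for the controls could be computed as follows:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

def prevalidated_scores(X_control, y_control, k=10, seed=0):
    # Each control's score comes from a model trained only on the
    # other controls (k-fold prevalidation). Scores are log-odds.
    scores = np.empty(len(y_control))
    folds = KFold(n_splits=k, shuffle=True, random_state=seed)
    for train_idx, test_idx in folds.split(X_control):
        model = LogisticRegression(max_iter=1000)
        model.fit(X_control[train_idx], y_control[train_idx])
        scores[test_idx] = model.decision_function(X_control[test_idx])
    return scores
\end{verbatim}
The treated patients are then scored by a final model fit on all of the controls, as described above.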
\begin{example*} To compute predilection scores on the AQUAMAT data, we choose logistic regression: our outcome is in-hospital mortality. We begin with $10$-fold prevalidation to compute scores for the $\num{2743}$ control patients in the trial, and then we fit a model on all $\num{2743}$ controls and compute the predilection scores of the treated patients. The predilection scores have moderate goodness-of-fit (AUROC = $0.82$, AUPRC = $0.78$), and we report the odds ratios in Figure~\ref{fig:logreg}. We also illustrate the distribution of scores for the treated and control patients. \end{example*}
\begin{figure}
\caption{Odds ratios for the predilection score model, and the distributions of predilection scores for treated and control patients. An odds ratio above $1$ indicates increased risk of death.}
\label{fig:logreg}
\end{figure}
\subsection{Estimating treatment effects} \label{section:treatmenteffects}
We now estimate treatment effect as a function of predilection score. For a trial design with a control$:$treated ratio of $k$:$1$, we use optimal matching~\cite{hansen2019package} to form groups of $k$ controls and one treated patient with similar predilection scores. Each group's predilection score is their average predilection score, and for binary or continuous outcomes, their conditional average treatment effect (CATE) is the mean difference between the treated and control outcomes within the group.
\begin{example*} On the AQUAMAT data, we have a control$:$treated ratio of $1$:$1$, and we form $\num{2743}$ sets containing one control and one treated patient with similar predilection scores. On each matched set, we compute the treatment effect as the difference in in-hospital mortality between the treated and control patient. For example matched sets and their estimated treatment effects, see Table~\ref{table:examplescores}; for the treatment effect as a function of predilection score, see Figure~\ref{fig:matched_triplets}. \end{example*}
\begin{table}[H] \centering
\begin{tabular}{ r r | r r | r r } \multicolumn{2}{c }{predilection score} & \multicolumn{2}{c }{in-hospital mortality} & & \\ \hline \multicolumn{1}{c}{control} & \multicolumn{1}{c}{treated} & \multicolumn{1}{c}{control} & \multicolumn{1}{c}{treated} & \multicolumn{1}{c}{mean score} & \multicolumn{1}{c}{CATE} \\ \toprule $-4.21$ & $-4.21$ & $0$ & $1$ & $-4.21$ & $-1$\\ \rule{0pt}{2.6ex} $-5.34$ & $-5.38$ & $0$ & $0$ & $-5.36$ & $0$\\ \rule{0pt}{2.6ex} $-1.98$ & $-1.97$ & $1$ & $1$ & $-1.97$ & $0$\\ \rule{0pt}{2.6ex} $-4.78$ & $-4.78$ & $1$ & $0$ & $-4.78$ & $1$\\
\end{tabular} \caption{Predilection scores, outcomes and estimated treatment effects for four example matched sets in the AQUAMAT data. A lower predilection score indicates less severe illness.} \label{table:examplescores} \end{table}
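To make the pairing step concrete, here is a minimal sketch (ours; rank-based pairing by predilection score is used as a simple stand-in for the optimal matching of~\cite{hansen2019package}, and all names are our own):
\begin{verbatim}
import numpy as np

def matched_treatment_effects(score_t, y_t, score_c, y_c):
    # Pair treated and control patients by rank of predilection score
    # (1:1 design). Outcomes are binary with 1 = in-hospital death, so
    # each pair's treatment effect is control minus treated outcome
    # (a benefit is positive), matching the table above.
    ot, oc = np.argsort(score_t), np.argsort(score_c)
    n = min(len(ot), len(oc))
    s = (np.asarray(score_t)[ot[:n]] + np.asarray(score_c)[oc[:n]]) / 2
    t = np.asarray(y_c)[oc[:n]] - np.asarray(y_t)[ot[:n]]
    return s, t  # pair scores and treatment effects, ordered by score
\end{verbatim}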
\begin{figure}
\caption{On the left, treatment effect as a function of illness severity on the AQUAMAT randomized trial. On the right, a smoothing spline fit to the treatment effects suggests a region of predilection scores where patients respond more strongly to treatment.}
\label{fig:matched_triplets}
\end{figure}
\subsection{Finding the sweet spot}
\label{section:sweetspot} We have defined $\mathbf{t} = \{t_k\}_{k=1}^n$ and $\mathbf{s} = \{s_k\}_{k=1}^n$, the sequences of estimated treatment effects and predilection scores of our matched sets, both ordered by increasing predilection score. That is, $s_1 \leq s_2 \leq \dots \leq s_n$, and $t_i$ is the treatment effect for the set with predilection score $s_i$.
We would like to identify a contiguous subsequence of $\mathbf{t}$ that (1) is long (to cover as many patients as possible), and (2) has a large average treatment effect. To measure the extent to which any subsequence meets our requirements, we use the length of the sequence (which captures criterion 1) times the difference between the sequence average and the global average (which captures criterion 2). Explicitly, for the subsequence of $\mathbf{t}$ consisting of $\{t_i, t_{i+1}, \dots, t_j\}$, compute: \[
Z(i, j) = (j-i+1) \left( \text{mean}\left(\{t_k\}_{k=i}^j\right) - \text{mean}\left(\{t_k\}_{k=1}^n\right) \right). \]
The values of $i$ and $j$ that maximize $Z$ indicate the location of the sweet spot, and they are found by an exhaustive search over $i \in [1, n-1],\, j \in [i+1, n]$. We write these values as $(\hat{i}, \hat{j}) = \arg \max_{i, j} Z(i, j)$; the sweet spot includes patients with predilection scores between $s_{\hat{i}}$ and $s_{\hat{j}}$.
\begin{example*} On the AQUAMAT data, the maximum value of $Z$ is $\widehat{Z} = 47.47$, with sweet spot $(\hat{i}, \hat{j}) = (2153, 2631)$ corresponding to patients with predilection scores between $-1.70$ and $-0.20$. The mean treatment effect in this range is $0.12$; outside this range, it is $0.00$. This is illustrated in Figure~\ref{fig:sweet_spot_1}. \end{example*}
\begin{figure}
\caption{The sweet spot identified on the AQUAMAT data. On the left, we highlight the range of predilection scores in the sweet spot. On the right, we show the mean treatment effect estimate inside the sweet spot. For illustration, we include a smoothing spline fit to the treatment effect estimate.}
\label{fig:sweet_spot_1}
\end{figure}
\subsection{Calibrating} \label{section:pvalue}
We wish to test the null hypothesis that there is no sweet spot related to illness severity. We ask: if there were \textit{no sweet spot}, how often would we observe a value at least as large as $\widehat{Z} = Z(\hat{i}, \hat{j})$?
Suppose there is no sweet spot, and the true treatment benefit is the same across the entire range of illness severity. In this case, the ordering of the treatment effect sequence $\mathbf{t}$ does not matter: with the same probability, we could have observed any permutation of $\mathbf{t}$, and the maximum value of $Z$ on the permuted sequence would be similar to $\widehat{Z}$. However, if there is a sweet spot, the ordering of $\mathbf{t}$ is meaningful, and $\widehat{Z}$ would be larger than most of the maximum values of $Z$ on the permuted sequences.
We test our null hypothesis with a permutation test: we repeatedly shuffle the values of $\mathbf{t}$ and find the maximum value of $Z$ on the permuted sequence. The $p$-value is the relative frequency that the maximum value on the permuted sequence is at least as large as $\widehat{Z}$.
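A minimal sketch of this permutation test (ours; it reuses the \texttt{find\_sweet\_spot} helper sketched above):
\begin{verbatim}
import numpy as np

def sweet_spot_pvalue(t, n_perm=1000, seed=0):
    # p-value: the relative frequency with which a shuffled sequence
    # attains a maximal Z at least as large as the observed one.
    rng = np.random.default_rng(seed)
    _, _, z_obs = find_sweet_spot(t)
    exceed = sum(
        find_sweet_spot(rng.permutation(t))[2] >= z_obs
        for _ in range(n_perm))
    return exceed / n_perm
\end{verbatim}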
\begin{example*} We do a permutation test on the AQUAMAT data. In Section~\ref{section:treatmenteffects}, we computed the ordered sequence of treatment effects, and in Section~\ref{section:sweetspot} we chose the sweet spot corresponding to $\widehat{Z} = 47.47$. For \num{1000} iterations, we permuted the sequence of treatment effects and computed the maximum value of $Z$. On one of those permutations, we observed a value of $Z$ that was larger than $47.47$, which corresponds to $p$-value $0.001$. We visualize our permutation test in Figure~\ref{fig:pvalue}. \end{example*}
\begin{figure}
\caption{Sweet spot permutation test on the AQUAMAT data. }
\label{fig:pvalue}
\end{figure}
\subsection{Debiasing}
Finally, we wish to estimate the treatment effect in the sweet spot. The naive choice is the mean treatment effect in the sweet spot: $\hat{\tau} = \text{mean}(\{t_{\hat{i}}, \dots, t_{\hat{j}}\})$. However, this may be optimistic, as searching for the sweet spot may bias the treatment effect. We debias our estimate with the parametric bootstrap~\cite{tibshirani1993introduction}, using our sweet spot as a model to simulate data.
Having computed treatment effects $\mathbf{t} = \{t_k\}_{k=1}^n$ and a sweet spot location $[\hat{i}, \hat{j}]$, we generate a new sequence of treatment effects $\mathbf{t}^*$. The values inside the sweet spot $\{t^*_k\}_{k=\hat{i}}^{\hat{j}}$ are sampled with replacement from $\{t_k\}_{k=\hat{i}}^{\hat{j}}$, the values inside the sweet spot on the original sequence. Likewise, the values outside the sweet spot, $\{t^*_k\}_{k=1}^{\hat{i}-1} \cup \{t^*_k\}_{k=\hat{j}+1}^{n}$, are sampled with replacement from $\{t_k\}_{k=1}^{\hat{i}-1} \cup \{t_k\}_{k=\hat{j}+1}^{n}$. We repeatedly simulate data using this method, find its sweet spot, and estimate the CATE in the sweet spot. Our bootstrapped bias estimate is the difference between the mean bootstrapped CATE estimate, $\hat{\tau}_\text{boot}$, and $\hat{\tau}$. To bias-correct $\hat{\tau}$, subtract the bias: \begin{align*} \hat{\tau}_{\text{corrected}} &= \hat{\tau}- \widehat{\text{bias}}\\ &= \hat{\tau}- \left(\hat{\tau}_\text{boot} - \hat{\tau}\right)\\ &= 2\, \hat{\tau}- \hat{\tau}_\text{boot}. \end{align*}
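A sketch of this bias correction (ours; indices are 0-based inclusive, and it reuses the \texttt{find\_sweet\_spot} helper above):
\begin{verbatim}
import numpy as np

def debiased_cate(t, i_hat, j_hat, n_boot=1000, seed=0):
    # Parametric bootstrap: resample inside/outside the fitted sweet
    # spot, re-find the sweet spot, and subtract the estimated bias.
    rng = np.random.default_rng(seed)
    t = np.asarray(t, dtype=float)
    inside = t[i_hat:j_hat + 1]
    outside = np.concatenate((t[:i_hat], t[j_hat + 1:]))
    tau_hat = inside.mean()
    boot = []
    for _ in range(n_boot):
        t_star = np.concatenate((
            rng.choice(outside, size=i_hat),
            rng.choice(inside, size=j_hat - i_hat + 1),
            rng.choice(outside, size=len(t) - j_hat - 1)))
        i_b, j_b, _ = find_sweet_spot(t_star)
        boot.append(t_star[i_b:j_b + 1].mean())
    return 2 * tau_hat - np.mean(boot)  # tau_hat minus estimated bias
\end{verbatim}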
In every run of the bias-correction bootstrap, we also obtain an estimate of the location of the sweet spot. These estimates form an empirical distribution around the values of $\hat{i}$ and $\hat{j}$ in the original estimation of the sweet spot.
\begin{example*} On the AQUAMAT data, we found $\hat{\tau} = 0.123$. Our bootstrap estimate was $\hat{\tau}_\text{boot} = 0.126$, so we overestimated by $0.003$ on average. We adjust our estimate down to $\hat{\tau}_\text{corrected} = 0.120$ (Figure~\ref{fig:biashistogram}). \end{example*}
\begin{figure}
\caption{Visualizations from the de-biasing parametric bootstrap on the AQUAMAT data. On the left, we visualize bias: we overestimate the CATE by $0.003$. On the right, we visualize uncertainty around the location of the sweet spot.}
\label{fig:biashistogram}
\end{figure}
\subsection{More results on real data}
We summarise our results on the AQUAMAT data in a single image (Figure~\ref{fig:final}). Our results include the original and debiased estimates of the CATE inside and outside the sweet spot, together with our bootstrapped distributions of the start and end index of the sweet spot. We also visualize the smoothed treatment effect as a function of predilection score.
\begin{figure}
\caption{Sweet spot found on the AQUAMAT data.}
\label{fig:final}
\end{figure}
On the AQUAMAT data, we compare our method to the Gompertz fit in \cite{redelmeier2020approach}, the causal forest in \cite{athey2019generalized}, and the reference class approach in \cite{watson2020graphing}. To compute a $p$-value for the Gompertz method, we use the permutation test defined in Section~\ref{section:pvalue}.
\begin{figure}
\caption{Comparison of methods on the AQUAMAT data.}
\label{fig:comparison}
\end{figure}
We also illustrate our method on the SEAQUAMAT trial as in \cite{watson2020machine}, which compared quinine to artesunate for the treatment of severe malaria in Asian adults. The superiority of artesunate for severe malaria is now well established~\cite{white2014451}, and in this retrospective analysis, we consider artesunate to be standard of care. We follow the data preparation and experimental setup in \cite{watson2020machine}, and our finding agrees with theirs: we fail to reject the null hypothesis that there is no range of illness severity for which quinine is superior.
\begin{figure}
\caption{No sweet spot found on the SEAQUAMAT trial data.}
\label{fig:seaquamat}
\end{figure}
\section{Simulation studies} \subsection{Type I error} \label{section:type1error} To compute the type I error of our method, we simulate randomized trial data with no sweet spot. In each of our $\num{1000}$ simulations, covariates for $400$ patients are drawn from a standard multivariate normal distribution in $10$ dimensions, and patients are assigned to receive treatment with probability $0.5$. The probability of a negative outcome is determined by a logistic model with coefficients drawn from a standard multivariate normal distribution in $10$ dimensions and a normally distributed error term. For patients who receive treatment, this probability is lowered by $0.05$ (the treatment effect). On our simulated data, we find our method to be well-calibrated, and this is illustrated in Figure~\ref{fig:type1}.
\begin{figure}
\caption{Type I error on simulated data.}
\label{fig:type1}
\end{figure}
\subsection{Power} \label{section:power}
To compute the power of our method, we again simulate randomized trial data as in Section~\ref{section:type1error}, but now we add an extra treatment effect for patients in the middle range of illness severity. In Figure~\ref{fig:power}, we examine power along two axes: the first is the extra treatment effect in the sweet spot, and the second is the size of the sweet spot. We compare our method to a causal forest~\cite{athey2019generalized} and to the method in \cite{watson2020machine}, ``ML analysis plans for randomized controlled trials". The latter computes a $p$-value and is directly comparable to our method. To compare our method to the former, we do the following: on our simulated data, we fit a causal forest and predict outcomes, and then we perform a one-sided $t$-test of whether the mean predicted outcome in the sweet spot is larger than that outside the sweet spot.
In this setting, our method has the highest power: this is expected, as we simulated data that matches our method's assumptions. This comparison is included to illustrate that all methods struggle when the sweet spot is small (covering only about $10\%$ of the data), and when there is little extra benefit in the sweet spot ($ \leq 20\%$).
\begin{figure}
\caption{Power as a function of sweet spot size and magnitude on simulated data. ``ML $p$-value" refers to the method in \cite{watson2020machine}.}
\label{fig:power}
\end{figure}
We repeat this experiment, this time defining the sweet spot location as a region defined by three of the ten covariates. The methods in \cite{athey2019generalized} and \cite{watson2020machine} are tree based, and should be able to discover the relevant covariates. In all methods, the power is low compared to what the sweet spot method achieves in Figure~\ref{fig:power}, and the method in \cite{watson2020machine} has superior performance when the true sweet spot is large and strong. A sweet spot search over the full covariate space can only have large power when there is a large effect.
\begin{figure}
\caption{Power as a function of sweet spot size and magnitude on simulated data, where the sweet spot is defined by three of the ten covariates. ``ML $p$-value" refers to the method in \cite{watson2020machine}.}
\label{fig:power2}
\end{figure}
\section{Further notes and discussion}
\subsection{Prevalidation} \label{section:preval}
To compute treatment effects, we match a treated patient with a control who has similar illness severity, and we compute the difference in their outcomes. However, we do not know the \textit{true} illness severity for any patient; rather, we fit a predilection score model, and use predilection score as a proxy for illness severity. So, though we think of our matched sets as sets of patients with similar illness severity, they are more precisely sets of patients with similar \textit{predilection scores}.
Though subtle, this distinction is important. Without prevalidation, we may overfit our predilection score model to the controls. If we imagine overfitting to the extreme, a large predilection score for a control indicates only that the model knows the control had a negative outcome; a small score likewise indicates a positive outcome. Our model has no such knowledge of the future for the treated patients.
Overfitting will have downstream effects: we use the predilection scores to pair treated and control patients. In the pairs with larger predilection scores, we overestimate the treatment effect: the control is more likely to have had a negative outcome than the treated patient. Similarly, we underestimate the treatment effect in the pairs with smaller predilection scores, as the control is more likely to have had a positive outcome.
As a result, without prevalidating the predilection score model, we inject treatment effect heterogeneity into our data, which causes us to lose control of type I error: we are more likely to find a sweet spot when there is none. This is discussed in detail in \cite{abadie2018endogenous}.
We illustrate this in Figure~\ref{fig:preval} on simulated data, with $n=800$ trial participants and $p=10$ and $p=100$ covariates. In scenarios where we are more prone to overfitting (in this example, when $p=100$), our problem becomes more pronounced.
\begin{figure}
\caption{Type I error on simulated data, with and without prevalidation. We simulate data as in Section~\ref{section:type1error}.}
\label{fig:preval}
\end{figure}
\subsection{Calibration}
Without a measure of calibration, it can be tempting to erroneously find a sweet spot. In Figure~\ref{fig:calibrationtrick}, we show a sample drawn from a data generating process with no sweet spot; by visual inspection, however, it is tempting to identify a sweet spot over the range of the $70^{\text{th}} - 84^{\text{th}}$ predilection scores. Our permutation test $p$-value is $0.160$; it correctly finds no sweet spot.
\begin{figure}
\caption{Simulated data with no sweet spot, though it is tempting to find a sweet spot by visual inspection.}
\label{fig:calibrationtrick}
\end{figure}
\section{Discussion} The idea behind our method is simple: identify the range of illness severity where treatment benefit is maximized, estimate the benefit inside and outside this range, and test the null hypothesis that there is no treatment effect heterogeneity. There are existing methods for modeling treatment effect heterogeneity in randomized trials: some model the treatment effect as a function of covariates, and others identify the presence of treatment effect heterogeneity. Our method is unique as it does both, and our results are straightforward to visualize and interpret. Further, our method has a natural extension to multi-arm trials: treatments can be compared pairwise (as we have illustrated here with a treated and control group), or treatments may be compared to the group of all other treatments.
When the trial has a survival endpoint where patients may be right-censored (as in \cite{redelmeier2020approach}), estimating the treatment effect is complicated by censoring. We do not know the true outcome for all patients, and it is less obvious how to directly compare outcomes within a matched pair. This is an open area for future research.
In this paper, we exploited the relationship between treatment effect and illness severity. When the treatment effect is independent of illness severity (or it cannot be expressed by our predilection score model), our method will not find the sweet spot. In principle, another measure could be used in place of illness severity, though this is advisable only when there is a natural choice for a particular dataset. Simulations in Section~\ref{section:power} show that identifying small or weak sweet spots remains an open challenge.
\begin{appendices} \section{Finding the sweet spot} We can speed up our computation of $Z(i, j) = \sum_{k=i}^j t_k - \frac{j-i+1}{n} \sum_{k=1}^n t_k$ by vectorizing. For each window length $k \leq n$, we can simultaneously compute the vector of values of $Z(i, j)$ that satisfy $j-i+1=k$. First, we compute the cumulative sum of $\mathbf{t}$, denoted $\mathbf{t}^*$, with $t^*_0 = 0$ and $t_w^* = \sum_{l=1}^w t_l$. We then compute: \[ \mathbf{Z}(k) = \{ t^*_{k}, t^*_{k+1}, \dots, t^*_{n} \} - \{ t^*_0, t^*_1, \dots, t^*_{n-k} \} - k \frac{t^*_n}{n}, \] whose $i$th entry equals $Z(i, i+k-1)$.
For very large studies, we can conserve computing time by choosing to consider only e.g. ranges where $j-i$ is even. We may also restrict to ranges within a minimum and maximum size: for example, further study of the treatment may be practical only if the sweet spot covers at least $10\%$ of the patients in the trial.
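In NumPy, this vectorized search could look like the following (our sketch; the names are ours):
\begin{verbatim}
import numpy as np

def find_sweet_spot_vectorized(t):
    # One vectorized pass per window length k, using cumulative sums
    # with t*_0 = 0; equivalent to the double loop, but much faster.
    t = np.asarray(t, dtype=float)
    n = len(t)
    cum = np.concatenate(([0.0], np.cumsum(t)))  # cum[w] = t_1+...+t_w
    mean_all = cum[-1] / n
    best_z, best_i, best_j = -np.inf, 0, 1
    for k in range(2, n + 1):  # window length
        z = cum[k:] - cum[:n - k + 1] - k * mean_all
        i = int(np.argmax(z))
        if z[i] > best_z:
            best_z, best_i, best_j = z[i], i, i + k - 1
    return best_i, best_j, best_z
\end{verbatim}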
\end{appendices}
\end{document} | arXiv |
Max Noether's theorem
In algebraic geometry, Max Noether's theorem may refer to the results of Max Noether:
• Several closely related results of Max Noether on canonical curves
• AF+BG theorem, or Max Noether's fundamental theorem, a result on algebraic curves in the projective plane, on the residual sets of intersections
• Max Noether's theorem on curves lying on algebraic surfaces, which are hypersurfaces in P3, or more generally complete intersections
• Noether's theorem on rationality for surfaces
• Max Noether theorem on the generation of the Cremona group by quadratic transformations
See also
• Noether's theorem, usually referring to a result derived from work of Max's daughter Emmy Noether
• Noether inequality
• Special divisor
• Hirzebruch–Riemann–Roch theorem
| Wikipedia |
Environment: Seabed data
The seabed data fall naturally into two distinct groups, governing the shape of the seabed and its properties.
Seabed shape data
Shape type
Three types of seabed shape are available:
A flat seabed is a simple plane, which can be horizontal or sloping.
A profiled seabed is one where the shape is specified by a 2D profile in a particular direction. Normal to that profile direction the seabed is horizontal.
A 3D seabed allows you to specify a fully general 3D surface for the seabed, by defining the depth at a series of $X,Y$ positions, with a choice of linear or cubic polynomial interpolation in between.
Seabed origin, depth
The seabed origin is a point on the seabed and it is the origin relative to which the seabed data are specified. It is defined by giving its coordinates with respect to global axes.
For a flat seabed, if you enter a value for the seabed origin $Z$ coordinate, then the depth value (the water depth at the seabed origin) will be updated accordingly, and vice versa: if you enter a depth, the $Z$ coordinate will be updated, based on the value of sea surface Z.
For profile and 3D seabeds, the $Z$ coordinate and water depth at the seabed origin are displayed but they are not editable: they are determined by the $Z$ values in the profile or 3D geometry data and the given sea surface Z.
The seabed direction follows the OrcaFlex direction conventions, so is measured positive anti-clockwise from the global $X$ axis when viewed from above. The meaning of this direction depends on the type of seabed in use:
For a flat seabed, the direction is that of maximum upwards slope. For example, 0° means sloping upwards in the global $X$ direction, 90° means sloping up in the in the global $Y$ direction.
For a profile seabed, the direction is that in which the 2D profile is defined.
For a 3D seabed, the direction and the seabed origin together define a frame of reference, relative to which the seabed data points are specified.
Warning: The depth at the seabed origin is used for all the wave theory calculations so, if the water is shallow and the depth varies, the seabed origin should normally be chosen to be near the main wave-sensitive parts of the model.
Flat seabed
This is the maximum upward slope, in degrees above the horizontal. A flat seabed is modelled as a plane passing through the seabed origin and inclined at this angle in the seabed direction. The model is only applicable to small slopes. OrcaFlex will accept slopes of up to 45° but the model becomes increasingly unrealistic as the slope increases, because the bottom current remains horizontal.
Profile seabed
The profile table defines the seabed shape in the vertical plane through the seabed origin in the seabed direction. The shape is specified by giving the either the seabed Z coordinate relative to global axes, or the depth, at a series of points specified by their distance from seabed origin in the seabed direction (negative values representing points in the opposite direction). If a $Z$ coordinate is entered then the depth is updated to match, and vice versa.
Seabed $Z$ values in between profile points are obtained by interpolation, with a choice of interpolation method. The seabed is assumed to be horizontal in the direction normal to the seabed profile direction and beyond the ends of the table.
Warning: Linear interpolation can cause difficulties for static and dynamic calculations. If you are having problems with static convergence or unstable simulations then you should try one of the other interpolation methods.
Note: You cannot model a true vertical cliff by entering two points with identical distances from seabed origin but differing $Z$ coordinates – the second point will be ignored. You can, however, specify a near-vertical cliff. If you do this, to avoid interpolation overshoot you may need to either specify several extra points just either side of the cliff or use linear interpolation.
The view profile button displays a graph of the seabed profile, showing the specified profile points and the interpolating curve. The seabed is horizontal beyond the ends of the graph.
You should check that the interpolated shape is satisfactory, in particular that the interpolation has not introduced overshoot – i.e. where the interpolated seabed is significantly higher or lower than desired. Overshoot can be solved by adding more profile points in the area concerned and carefully adjusting their coordinates until suitable interpolation is obtained.
3D seabed
The 3D seabed is defined by specifying a set of x, y and Z coordinates of the seabed. The $x$ and $y$ coordinates are given with respect to a right-handed frame of reference with origin at the seabed origin, $Z$ vertically upwards, $x$-axis horizontal in the specified seabed direction and $y$-axis horizontal and normal to that $x$-direction. The $Z$ coordinate is specified relative to the global origin. Equivalently, you may give depth values instead of $Z$ coordinates. If a $Z$ coordinate is entered then the depth is updated to match, and vice versa.
OrcaFlex forms a triangulation of the input data and interpolates this with either the linear or cubic polynomial method. We normally recommend using cubic polynomial: this provides a smooth interpolation which makes both static and dynamic calculations more stable and robust than the linear method.
The linear method has been provided for the special case in which the seabed data are limited to only depth and slope at each line anchor point. The linear interpolation method then allows you to build a seabed which is effectively a number of different flat sloping seabeds for each line.
The minimum edge triangulation angle, $\alpha$, provides a degree of control over the triangulation. Some data sets (for example those which are concave) can result in unusual artefacts around the edges of the data; if this happens, you may find that setting $\alpha$ to a value greater than zero helps. With $\alpha > 0$, triangles at the edge of the triangulation with internal angles less than $\alpha$ are removed. On the other hand, this may lead to significant portions of your triangulated seabed being removed, so unless you see these artefacts we recommend that you choose $\alpha = 0$.
Note: The seabed generated by OrcaFlex only extends as far as the data given and, at any point outside the horizontal area specified, the sea is considered to be infinitely deep. You must therefore provide data covering the whole area of seabed which might be contacted by any model object.
Seabed model data
Two seabed models are available, an elastic model (which may be linear or nonlinear) and a nonlinear soil model. In summary,
The elastic seabed behaves as a simple elastic spring in directions normal and tangential to the seabed plane. The normal direction stiffness may be defined independently of the stiffness for the tangential directions. The normal stiffness may be linear or nonlinear; tangential stiffness is linear.
The nonlinear soil model is a more sophisticated model of the normal direction seabed resistance. It includes the nonlinear hysteretic behaviour of seabed soil in the normal direction, including modelling of suction effects when a penetrating object rises up. As with the elastic model, tangential stiffness is linear.
The data requirements for these two models are described below.
Elastic model
The elastic model treats the seabed as a simple elastic spring, which may be either linear or nonlinear in both the seabed normal and the seabed shear directions. This gives a seabed normal resistance that is proportional to the penetration, and a seabed lateral resistance that is proportional to the lateral displacement of the contact point (e.g. a node on a line) from its undisturbed position.
In addition, when explicit integration is used the elastic model includes linear damping in the normal and lateral directions, giving extra damping resistance that is proportional to the rate of penetration (for the normal direction) or the rate of lateral movement (for the lateral directions). The linear damper in the normal direction acts only when penetration is increasing and not when it is decreasing, i.e. suction effects are not modelled.
The seabed normal stiffness specifies the properties for the normal spring. To specify a linear stiffness, enter a single stiffness value that is the reaction force that the seabed applies per unit depth of penetration per unit area of contact. For nonlinear stiffness, use variable data to specify a table of reaction force per unit area of contact against depth of penetration.
The seabed shear stiffness is used by the friction calculation. A value of 0 disables friction. A value of '~' indicates that the seabed normal stiffness value is to be used: in the case that the normal stiffness is nonlinear, then the value corresponding to zero penetration is used.
The seabed damping is the constant of proportionality of the damping force, and is a percentage of critical damping. Seabed damping is always zero when using the implicit integration scheme.
Nonlinear soil model
The nonlinear soil model has been developed in collaboration with Prof. Mark Randolph FRS (Centre for Offshore Foundation Systems, University of Western Australia). It builds upon earlier models which used a hyperbolic secant stiffness formulation, such as those proposed by Bridge et al and Aubeny et al, and is documented in Randolph and Quiggin (2009).
The nonlinear soil model is more sophisticated than the elastic model. It models the nonlinear and hysteretic properties of seabed soil in the normal direction, including modelling of suction effects. (In the lateral direction the seabed is modelled in the same way as for the linear elastic model.)
The nonlinear soil model is suited to modelling soft clays and silty clays, and is particularly appropriate for typical deep water seabeds where the mudline undrained shear strength is only a few kPa or less, and where the seabed stiffness response to catenary line contact is dominated by plastic penetration rather than elastic response. This model is not suitable for caprock conditions, and using it to model sand requires very careful choice of soil data and model parameters to represent sand response.
For further details of the model, see the seabed theory and nonlinear soil model theory topics.
Note: For dynamic analysis using implicit integration you might find that you need to use a shorter time step with the nonlinear soil model than with the elastic model.
The data for the nonlinear soil model are divided into three groups:
These specify the undrained shear strength and saturated density of the seabed soil. They should be obtained from geotechnical survey of the site.
The shear strength is determined by the undrained shear strength at mudline, $s_\mathrm{u0}$, and the undrained shear strength gradient, $\rho$. The undrained shear strength at any given penetration distance $z$ is then \begin{equation} s_\mathrm{u}(z) = s_\mathrm{u0} + \rho z \end{equation} The saturated soil density is the density of the seabed soil when fully saturated with sea water. It is used by the nonlinear seabed model to model the extra buoyancy effect that arises when a penetrating object displaces seabed soil.
Site-specific data should be used. Typical saturated soil densities are in the range 1.4 to 1.6 te/m3. Typical deep water sediments have essentially negligible undrained shear strength at mudline (0 to 5 kPa) and an undrained shear strength gradient of 1.3 to 2 kPa/m. Seabed soils are typically stronger in shallow water than in deep water.
Shear stiffness and damping
These specify the strength of the lateral linear spring-damper that is used to model the shear resistance. These data are the same as those described above for the elastic model. The shear damper is only used for explicit integration; for implicit integration the shear damper strength is zero.
The shear stiffness can be given as the default value '~', to get OrcaFlex to calculate a value based on the soil shear strength data given by the soil properties. The formula used is \begin{equation} \text{Shear stiffness} = \frac{20}{d}\ (s_\mathrm{u0} + \rho\ \tfrac12 d) \end{equation} where $d$ is the contact diameter of the penetrating object; the term in brackets is the soil undrained shear strength at a penetration depth of $d/2$.
Seabed soil model parameters
These parameters appear on a separate page on the environment data form. They are non-dimensional parameters that control how the seabed soil is modelled; their use is detailed under the nonlinear soil model theory topic. | CommonCrawl |
\begin{document}
\hbadness=99999
\begin{abstract} We consider a discrete random walk on a diagonal lattice in two and three dimensions and obtain explicit solutions of absorption probabilities and probabilities of return in several domains. In three dimensions we consider both the cube and the dodecahedron variant. In two dimensions we obtain explicit formula in case of rotated barriers. \end{abstract}
\title{Random walk on a diagonal lattice}
\section{Introduction} Discrete random walks are studied in a number of standard books, see e.g. Spitzer~\cite{SP} and Feller~\cite{FE}. Polya~\cite{PO} was the first to observe that a walker is certain to return to his starting position in one and two dimensional symmetric discrete random walks while there exists a positive escape probability in higher dimensions. McCrea and Whipple~\cite{MC} study simple symmetric random paths in two and three dimensions, starting in a rectangular lattice on the integers with absorbing barriers on the boundaries. After taking limits they obtain probabilities of absorption in two and three dimensional lattices. Bachelor and Henry~\cite{BA1}~\cite{BA2} use the McCrea-Whipple approach and find the exact solution for random walks in the triangular lattice with absorbing boundaries and for random walks on finite lattice tubes. In this paper we study random walks on a diagonal lattice.
\section{Random walk on a diagonal lattice in two dimensions with absorbing boundaries}
\subsection{Rectangular region}
We define an interior $I$ of a rectangular region: $I=\{(p,q)| 1\leq p\leq m, 1\leq q\leq n\}$ The boundary of this region is $B$, which consist of absorbing barriers. We define $F_{(a,b)}(p,q)$ as the expected number of departures from $(p,q)$ when starting in the interior source $(a,b)$ on a diagonal lattice. We’ll often use the abbreviation $F(p,q)$. We study a diagonal lattice, so we have for $I$: \begin{multline} \label{eq:one} F(p,q)=\delta_{a,p} \delta_{b,q} +\\ \frac{1}{4}\{F(p+1,q+1)+F(p+1,q-1)+F(p-1,q+1)+F(p-1,q-1)\} \end{multline} and for $B$:
\begin{equation} \label{eq:two} F(p,q)=0 \end{equation} The homogeneous part of the difference equation~\eqref{eq:one} has solutions \(F(p,q)= Ae^{ip\alpha+q\beta}\), where $\cos\alpha\cosh\beta=1$, so $F(p,q)=C\sin{\alpha p}\sinh{\beta q}.$ \\ We can construct solutions of~\eqref{eq:one} and~\eqref{eq:two}: \[ F_1(p,q)=\sum_{r=1}^{m} C(r)\sin\frac{pr\pi}{m+1}\sinh q\beta_r\sinh[(n+1-b)\beta_r] \quad (q\leq b) \]
\[ F_2(p,q)=\sum_{r=1}^{m} C(r)\sin\frac{pr\pi}{m+1}\sinh b\beta_r\sinh[(n+1-q)\beta_r] \quad (q\geq b) \] where \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\] We substitute these solutions in~\eqref{eq:one} with $q=b$ and get: \begin{multline*}
\sum_{r=1}^{m} C(r)\sin\frac{pr\pi}{m+1}\{\sinh b\beta_r\sinh(n+1-b)\beta_r- \\ \frac{1}{2}\cos\frac{r\pi}{m+1}[\sinh b\beta_r\sinh(n-b)\beta_r+\sinh (b-1)\beta_r\sinh(n+1-b)\beta_r]\}=\delta_{a,p}
\end{multline*}
Using \(\cos \frac{r\pi}{m+1}\cosh \beta_r=1\) we get after some calculations:
\begin{equation*}
\sum_{r=1}^{m} C(r)\sin\frac{pr\pi}{m+1}\{ \frac{1}{2}\cos\frac{r\pi}{m+1}\sinh \beta_r\sinh[(n+1)\beta_r]\}=\delta_{a,p}
\end{equation*}
Using
\begin{equation*}
\frac{2}{m+1}\sum_{r=1}^{m} \sin\frac{ar\pi}{m+1}\sin\frac{pr\pi}{m+1}=\delta_{a,p}
\end{equation*}
we get:
\[ F_1(p,q)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{pr\pi}{m+1} \sinh q\beta_r\sinh[(n+1-b)\beta_r]}{\tanh \beta_r\sinh[(n+1)\beta_r]} \quad (q\leq b) \]
\[ F_2(p,q)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{pr\pi}{m+1} \sinh b\beta_r\sinh[(n+1-q)\beta_r]}{\tanh \beta_r\sinh[(n+1)\beta_r]} \quad (q\geq b) \] where \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\] Remark. When m is odd we have a problem in $r=\frac{m+1}{2}$. We can change the roles of $p$ and $q$ in our solutions, but when both $m$ and $n$ are odd, our method doesn't work.\\ We obtain absorption probabilities in elements of B by observing (diagonal) neighbors in the interior region. Let $A(p,q)$ be the probability of absorption in $(p,q)\in B$. Then we have for example: $A(m+1,n+1)=\frac{1}{4}{F_2(m,n)},\quad A(m+1,n)=\frac{1}{4}{F_2(m,n-1)},\quad A(m+1,n-1)=\frac{1}{4}[F_2(m,n)+F_2(m,n-2)]$. \subsection{Semi infinite strip} By taking $n \rightarrow \infty$ in the rectangular solution we get: \[ F_1(p,q)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{pr\pi}{m+1} \sinh q\beta_r \exp{(-b\beta_r)}}{\tanh \beta_r} \quad (q\leq b) \]
\[ F_2(p,q)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{pr\pi}{m+1} \sinh b\beta_r \exp{(-q\beta_r)}}{\tanh \beta_r} \quad (q\geq b) \] where \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\] \subsection{Infinite strip} By taking $b,q \rightarrow \infty$ , $q-b=s$ finite in the solution of the semi infinite strip, we get: \[ F_{(a,0)}(p,s)= \frac{2}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{pr\pi}{m+1} \exp{(-\lvert s \lvert\beta_r)}}{\tanh \beta_r} \] where \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\]
\subsection{Infinite Quadrant} By letting $m,n\rightarrow \infty$ in the block solution, we get the infinite quadrant $p,q>0$. \[ F_1(p,q)= \frac{8}{\pi}\int_{0}^{\pi} \frac{ \sin{a \lambda} \sin{p \lambda} \sinh{q \mu} \exp{(-b \mu)}}{\tanh \mu} d\lambda \quad (q\leq b) \] \[ F_2(p,q)= \frac{8}{\pi}\int_{0}^{\pi} \frac{ \sin{a \lambda} \sin{p \lambda} \sinh{b \mu} \exp{(-b \mu)}}{\tanh \mu} d\lambda \quad (q\geq b) \] where \begin{equation*} \cos {\lambda}\cosh {\mu}=1 \end{equation*} \subsection{Half-plane}
By taking $m \rightarrow \infty$ ,in the solution of the infinite strip, we get: \begin{equation} \label{eq:three} F_{(a,0)}(p,s)= \frac{2}{\pi}\int_{0}^{\pi} \frac{ \sin{a \lambda} \sin{p \lambda} \exp{(-\lvert s \lvert\mu)}}{\tanh \mu} d\lambda \end{equation} where \begin{equation} \label{eq:four} \cos {\lambda}\cosh {\mu}=1 \end{equation} We prove this is the unique solution in the half-plane. First we prove that it is a solution: If $\lvert s \lvert\geq 1$ then substitute \eqref{eq:three} in \eqref{eq:one} and use \eqref{eq:four} . If $s=0$ then we again substitute \eqref{eq:three} in \eqref{eq:one} and get, using \eqref{eq:four} with starting point $(a,0)$: \[ 4F(p,0)-F(p+1,1)-F(p+1,-1)-F(p-1,1)-F(p-1,-1)= \] \[ \frac{2}{\pi}\int_{0}^{\pi} \frac{\sin{a\lambda}\left\{ 4\sin{p\lambda}-2 \left[\sin{(p+1)\lambda}+\sin{(p-1)\lambda} \right] {\rm e}^{-\mu}\right\}}{\tanh \mu}d\lambda= \] \[ \frac{2}{\pi}\int_{0}^{\pi} \frac{\sin{a\lambda}\sin{p\lambda}\left[ 4-4\cos\lambda {\rm e}^{-\mu}\right]}{\tanh \mu}d\lambda=\frac{8}{\pi}\int_{0}^{\pi} \sin{a\lambda}\sin{p\lambda}d\lambda=4 \delta_{a,p} \] The solution is unique: see Feller [6], (p.362)
\section{Random walk on a diagonal lattice in three dimensions}
\subsection{Block}
The interior is now defined by: $I=\{(p,q,r)| 1\leq p \leq l ,1 \leq q \leq m, 1\leq r\leq n\}$ The boundary of this region is $B$, which consist of absorbing barriers. We define $F_{(a,b,c)}(p,q,r)$ as the expected number of departures from $(p,q,r)$ when starting in the interior source $(a,b,c)$ on a diagonal lattice. We’ll often use the abbreviation $F(p,q,r)$. We study diagonal lattices. In three dimensions this can be realized in two ways: cube and dodecahedron. \\ \textit{We start with the cube model:}
\begin{multline} \label{eq:five} F(p,q,r)=\delta_{a,p} \delta_{b,q} \delta_{c,r} +
\frac{1}{8}\{F(p+1,q+1,r+1)+F(p+1,q+1,r-1)+ \\F(p+1,q-1,r+1)+F(p+1,q-1,r-1)+F(p-1,q+1,r+1)+\\F(p-1,q+1,r-1)+F(p-1,q-1,r+1)+F(p-1,q-1,r+1)\} \end{multline} and for $B$: \begin{equation*} F(p,q,r)=0 \end{equation*} The homogeneous part of the difference equation~\eqref{eq:five} has solutions \[(p,q,r)= Ae^{ip\alpha_1+iq\alpha_2+r\beta}\], where $\cos\alpha_1 \cos\alpha_2\cosh\beta=1$, so we have solutions
\[F(p,q,r)= C\sin{\alpha_1 p}\sin{\alpha_2 q}\sinh{\beta r}. \] Analogue to the 2-dimensional case we find \begin{multline*} F_1(p,q,r)= \frac{8}{(l+1)(m+1)} \\ \sum_{s=1}^{l}\sum_{t=1}^{m} \frac{ \sin\frac{as\pi}{l+1} \sin\frac{ps\pi}{l+1}\sin\frac{bt\pi}{m+1} \sin\frac{qt\pi}{m+1} \sinh r\beta_{st}\sinh[(n+1-c)\beta_{st}]}{\tanh \beta_{st}\sinh[(n+1)\beta_{st}]} \ (r\leq c) \end{multline*} \begin{multline*} F_2(p,q,r)= \frac{8}{(l+1)(m+1)} \\ \sum_{s=1}^{l}\sum_{t=1}^{m} \frac{ \sin\frac{as\pi}{l+1} \sin\frac{ps\pi}{l+1}\sin\frac{bt\pi}{m+1} \sin\frac{qt\pi}{m+1} \sinh c\beta_{st}\sinh[(n+1-r)\beta_{st}]}{\tanh \beta_{st}\sinh[(n+1)\beta_{st}]} \ (r\geq c) \end{multline*} where \[\cos \frac{s\pi}{l+1}\cos \frac{t\pi}{m+1}\cosh \beta_{st}=1\] \\ The next model is the \textit{dodecahedron} case: \begin{multline} \label{eq:six} F(p,q,r)=\delta_{a,p} \delta_{b,q} \delta_{c,r} + \\
\frac{1}{12}\{F(p+1,q+1,r)+F(p+1,q-1,r)+ F(p-1,q+1,r)+F(p-1,q-1,r)+\\F(p+1,q,r+1)+F(p+1,q,r-1)+F(p-1,q+1,r)+F(p-1,q-1,r)+\\F(p,q+1,r+1)+F(p,q+1,r-1)+F(p,q-1,r+1)+F(p,q-1,r-1)
\}
\end{multline}
The homogeneous part of the difference equation~\eqref{eq:six} has solutions:\\ \(F(p,q,r)= Ae^{ip\alpha_1+iq\alpha_2+r\beta}\), where $\cos\alpha_1 \cos\alpha_2+(\cos\alpha_1 +\cos\alpha_2)\cosh\beta=3$.\\ The solutions for dodecahedron case are the same as for the cube case except of the definition of $\beta_{st}$: \[\cos \frac{s\pi}{l+1}\cos \frac{t\pi}{m+1} +(\cos \frac{s\pi}{l+1}+\cos \frac{t\pi}{m+1})\cosh \beta_{st}=3\] \subsection{Three dimensional diagonal lattice} The solution in a 3-dimensional lattice can be obtained by taking $l,m,n,a,b,c,p,q,r \rightarrow \infty$ in the block solution with $ p-a=u, q-b=v, r-c=w $ finite:
\[F_{(0,0,0)}(u,v,w)=\frac{1}{\pi^2}\int_{0}^{\pi} \int_{0}^{\pi} \frac{\cos u \lambda \cos v\mu \exp{(- \left|w \right|\theta)}}{\tanh \theta}d\lambda d\mu \] where in the cube model we have:
\[cos\lambda \cos\mu \cosh\theta=1 \] and in the dodecahedron model we have:
\[cos\lambda \cos\mu + (\cos\lambda +\cos\mu)\cosh\theta=3 \] \subsection{Probability of return in 3-dimensional diagonal lattice} A well known result in case of simple random walk in 3 dimensions is that the probability of return to the starting point is approximately $0.34$; see e.g. McCrea and Whipple [3].\\ We first focus on the probability of return in the diagonal \textit{cube} case. The probability is $1-\frac{1}{F}$ where
\[ F_{(0,0,0)}(0,0,0)=\frac{1}{\pi^2}\int_{0}^{\pi} \int_{0}^{\pi} \frac{1}{\tanh \theta}d\lambda d\mu \] where \[\cos\lambda \cos\mu \cosh\theta=1\] \\ Using numerical integration, we find \[ F_{(0,0,0)}(0,0,0)=\frac{1}{\pi^2}\int_{0}^{\pi} \int_{0}^{\pi} (1-\cos^{2}\lambda \cos^{2}\mu)^{-0.5}d\lambda d\mu \approx 1.3932 \] In the diagonal cube case we have probability of return $1-\frac{1}{F} \approx 0.2822$ \\
Montroll~\cite{MO} uses a different approach. He observes that many crystals appear as body centered lattices. The body centered lattice is composed of two interpenetrating simple cubic lattices with the points of one lattice being at the center of the cubes of the other lattice. The walker moves to one of its eight neighboring lattice points in the other lattice. He finds the probability of return to the starting point in the diagonal case:$1-\frac{1}{u}\approx.282229985$ where $ u=\frac{1}{\pi^3}\int_0^\pi \int_0^\pi \int_0^\pi(1-\cos{x} \cos{y} \cos{z})^{-1} dxdydz \approx 1.3932039297 $ \\ We now focus on the probability of return in the diagonal \textit{dodecahedron} case.
The probability is $1-\frac{1}{F}$ where
\[ F_{(0,0,0)}(0,0,0)=\frac{1}{\pi^2}\int_{0}^{\pi} \int_{0}^{\pi} \frac{1}{\tanh \theta}d\lambda d\mu \] and \[\cos\lambda \cos\mu + (\cos\lambda +\cos\mu)\cosh\theta=3\] \\ Using numerical integration, we find
\[ F_{(0,0,0)}(0,0,0)=\frac{1}{\pi^2}\int_{0}^{\pi} \int_{0}^{\pi} [1-(\frac{\cos\lambda +\cos\mu}{3-\cos\lambda \cos\mu})^2]^{-0.5} d\lambda d\mu \approx 1.2298 \] In the diagonal dodecahedron case we have probability of return $1-\frac{1}{F} \approx 0.1868$ \\
\section{Transformations in two dimensions} We can transform the diagonal random walk to a simple one by first shrinking with factor $\frac{1}{\sqrt{2}}$ and then a rotation around the origin with angle $\pi /4$. We get: $(p,q)\rightarrow (\frac{p}{\sqrt{2}},\frac{q}{\sqrt{2}}) \rightarrow (\frac{p-q}{2},\frac{p+q}{2}).$ Let $(x,y)$ be our new coordinate system, then we have: $p=y+x, q=y-x$. We use this transformation to get the desired simple random walk, but now with rotated boundaries.
\subsection{Transformed Rectangular Region}
$I=\{(x,y)| 1\leq y+x\leq m, 1\leq y-x\leq n\}$.\\
Our original starting point $(a,b)$ is transformed in $(\frac{a-b}{2},\frac{a+b}{2})$.\\
When starting in $(\frac{a-b}{2},\frac{a+b}{2})$ we get
\[ F_1(x,y)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{ar\pi}{m+1} \sin\frac{(y+x)r\pi}{m+1} \sinh [(y-x)\beta_r]\sinh[(n+1-b)\beta_r]}{\tanh \beta_r\sinh[(n+1)\beta_r]} \] where $y-x\leq b$. We prefer to start in $(a,b)$ and then we get: \begin{multline*} F_1(x,y)=\\ \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{(a+b)r\pi}{m+1} \sin\frac{(y+x)r\pi}{m+1} \sinh [(y-x)\beta_r]\sinh[(n+1+a-b)\beta_r]}{\tanh \beta_r\sinh[(n+1)\beta_r]} \end{multline*}
\begin{multline*} F_2(x,y)= \\ \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{(a+b)r\pi}{m+1} \sin\frac{(y+x)r\pi}{m+1} \sinh {[(b-a)\beta_r]} \sinh[(n+1+x-y)\beta_r]}{\tanh \beta_r\sinh[(n+1)\beta_r]} \end{multline*} where $F_1$ is valid for $y-x \leq b-a$ and $F_2$ is valid for $y-x \geq b-a$ and \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\] \subsection{Transformed Semi infinite strip}
$I=\{(x,y)| 1\leq y+x\leq m, 1\leq y-x\}$; we start in $(a,b)$.
\[ F_1(x,y)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{(a+b)r\pi}{m+1} \sin\frac{(y+x)r\pi}{m+1} \sinh [(y-x)\beta_r] \exp[(a-b) \beta_r]}{\tanh \beta_r} \] \[ F_2(x,y)= \frac{4}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{(a+b)r\pi}{m+1} \sin\frac{(y+x)r\pi}{m+1} \sinh [(b-a)\beta_r] \exp[(x-y) \beta_r]}{\tanh \beta_r} \] where $F_1$ is valid for $y-x \leq b-a$ and $F_2$ is valid for $y-x \geq b-a$ and \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\]
\subsection{Transformed Infinite strip} Rotating and shrinking the solution of the semi infinite strip gives, when starting in $(a,a)$: \[ F(p,s)= \frac{2}{m+1}\sum_{r=1}^{m} \frac{ \sin\frac{2ar\pi}{m+1} \sin\frac{(p+s)r\pi}{m+1} \exp(-\lvert s \lvert\beta_r)}{\tanh \beta_r} \quad (1\leq p+s\leq m) \] where \[\cos \frac{r\pi}{m+1}\cosh \beta_r=1\]
\subsection{Transformed Infinite Quadrant}
$I=\{(x,y)| 1\leq y+x, 1\leq y-x\}$; we start in $(a,b)$. \[ F_1(x,y)= \frac{8}{\pi}\int_{0}^{\pi} \frac{ \sin{[(a+b) \lambda]} \sin{[(y+x) \lambda]} \sinh{[(y-x) \mu]} \exp[(a-b) \mu]}{\tanh \mu} d\lambda \] \[ F_2(x,y)= \frac{8}{\pi}\int_{0}^{\pi} \frac{ \sin{[(a+b) \lambda]} \sin{[(y+x) \lambda]} \sinh{[(b-a) \mu]} \exp[(x-y) \mu]}{\tanh \mu} d\lambda \] where $F_1$ is valid for $y-x\leq b-a$ and $F_2$ is valid for $y-x\geq b-a$ and \begin{equation*} \cos {\lambda}\cosh {\mu}=1 \end{equation*} \subsection{Transformed Half-plane} By taking $m \rightarrow \infty$ in the solution of the infinite strip, we get when starting in $(a,a)$: \begin{equation*} F(p,s)= \frac{2}{\pi}\int_{0}^{\pi} \frac{\sin{(2a \lambda)} \sin{[(p+s) \lambda]} \exp(-\lvert s \lvert\mu)}{\tanh \mu} d\lambda \quad (1\leq p+s) \end{equation*} where \begin{equation*} \cos {\lambda}\cosh {\mu}=1 \end{equation*}
\end{document} | arXiv |
\begin{document}
\title[]{A remark on the permutation representations afforded by the embeddings of $\mathrm{O}_{2m}^\pm(2^f)$ in $\mathrm{Sp}_{2m}(2^f)$}
\author[S.~Guest]{Simon Guest} \address{Simon Guest, Mathematics, University of Southampton, Highfield, SO17 1BJ, United Kingdom}\email{[email protected]}
\author[A.~Previtali]{Andrea Previtali}
\author[P.~Spiga]{Pablo Spiga} \address{ Dipartimento di Matematica e Applicazioni, University of Milano-Bicocca,\newline Via Cozzi 53, 20125 Milano, Italy}\email{[email protected], [email protected]}
\thanks{Address correspondence to P. Spiga, E-mail: [email protected]}
\subjclass[2000]{20B15, 20H30} \keywords{permutation groups; permutation character}
\begin{abstract} We show that the permutation module over $\mathbb{C}$ afforded by the action of $\mathrm{Sp}_{2m}(2^f)$ on its natural module is isomorphic to the permutation module over $\mathbb{C}$ afforded by the action of $\mathop{\mathrm{Sp}}_{2m}(2^f)$ on the union of the right cosets of $\mathrm{O}_{2m}^+(2^f)$ and $\mathrm{O}_{2m}^-(2^f)$. \end{abstract} \maketitle
\section{Introduction}\label{introduction} That a given finite group can have rather different permutation representations affording the same permutation character was shown by Helmut Wielandt in~$1979$. For instance, the actions of the projective general linear group $\mathop{\mathrm{PGL}}_d(q)$ on the projective points and on the projective hyperplanes afford the same permutation character, but these actions are not equivalent when $d\geq 3$. A more interesting example is offered by the Mathieu group $M_{23}$. Here we have two primitive permutation representations of degree $253$ affording the same permutation character, but with non-isomorphic point stabilizers (see~\cite[p.~$71$]{ATLAS}).
Establishing which properties are shared by permutation representations of a finite group $G$ with the same permutation character has been the subject of considerable interest. For instance, it was conjectured by Wielandt~\cite{W} that, if $G$ admits two permutation representations on $\Omega_1$ and $\Omega_2$ that afford the same permutation character, and if $G$ acts primitively on $\Omega_1$, then $G$ acts primitively on $\Omega_2$. This conjecture was first reduced to the case that $G$ is almost simple by F\"{o}rster and Kov\'acs~\cite{FK} and then it was solved (in the negative) by Guralnick and Saxl~\cite{GS}. Some more recent investigations on primitive permutation representations and their permutation characters can be found in~\cite{P}.
In this paper we construct two considerably different permutation representations of the symplectic group that afford the same permutation character. We let $q$ be a power of $2$, $G$ be the finite symplectic group $\mathop{\mathrm{Sp}}_{2m}(q)$, $V$ be the $2m$-dimensional natural module for $\mathop{\mathrm{Sp}}_{2m}(q)$ over the field $\mathbb{F}_q$ of $q$ elements, and $\pi$ be the complex permutation character for the action (by matrix multiplication) of $G$ on $V$. Since $q$ is even, the orthogonal groups $\mathrm{O}_{2m}^+(q)$ and $\mathrm{O}_{2m}^-(q)$ are maximal subgroups of $G$ (see~\cite{D}). For $\varepsilon\in \{+,-\}$, we let $\Omega^\varepsilon$ denote the set of right cosets of $\mathrm{O}_{2m}^\varepsilon(q)$ in $G$, and we let $\pi^\varepsilon$ denote the permutation character for the action of $G$ on $\Omega^\varepsilon$.
\begin{theorem}\label{thrm} The $\mathbb{C}G$-modules $\mathbb{C}V$ and $\mathbb{C}\Omega^+\oplus\mathbb{C}\Omega^-$ are isomorphic. That is, $\pi=\pi^++\pi^-$. \end{theorem} We find this behaviour quite peculiar considering that the $G$-sets $V$ and $\Omega^+\cup \Omega^-$ are rather different. For instance, $G$ has two orbits of size $1$ and $q^{2m}-1$ on $V$, and has two orbits of size $q^m(q^m+1)/2$ and $q^{m}(q^m-1)/2$ on $\Omega^+\cup \Omega^-$. Moreover, the action of $G$ on both $\Omega^+$ and $\Omega^-$ is primitive, but the action of $G$ on $V\setminus \{0\}$ is not when $q>2$.
\section{Proof of Theorem~\ref{thrm}}\label{sec1}
Inglis~\cite[Theorem~$1$]{I} shows that the orbitals of the two orthogonal subgroups are self-paired, hence the characters $\pi^+$ and $\pi^-$ are multiplicity-free (see \cite[\S 2.7]{C}). We will use this fact in our proof of Theorem~\ref{thrm}.
\begin{proof}[Proof of Theorem~\ref{thrm}] Let $1$ denote the principal character of $G$. Observe that $\pi=1+\pi_0$, where $\pi_0$ is the permutation character for the transitive action of $G$ on $V\setminus\{0\}$. In particular, for $v\in V\setminus\{0\}$, we have $\pi_0=1_{G_v}^G$, where $G_v$ is the stabilizer of $v$ in $G$. Frobenius reciprocity implies that $\langle\pi_0,\pi_0\rangle=\langle \pi_0|_{G_v},1\rangle$, and this equals the number of orbits of $G_v$ on $V\setminus\{0\}$. We claim that $G_v$ has $2q-1$ orbits on $V\setminus\{0\}$. More precisely we show that, given $w\in v^\perp\setminus\langle v\rangle$ and $w'\in V\setminus v^\perp$, the elements $\lambda v$ (for $\lambda\in \mathbb{F}_q\setminus\{0\}$), $w$, and $\lambda w'$ (for $\lambda\in\mathbb{F}_q\setminus\{0\}$) are representatives for the orbits of $G_v$ on $V\setminus\{0\}$. Since $G_v$ fixes $v$ and preserves the bilinear form $(\,,\,)$, these elements are in distinct $G_v$-orbits. Let $u\in V\setminus\{0\}$. If $u\in \langle v\rangle$, then $u=\lambda v$ for some $\lambda\neq 0$, and hence there is nothing to prove. Let $w_0=w$ if $(v,u)=0$, and let $w_0=\frac{(v,u)}{(v,w')} w'$ if $(v,u)\neq 0$. By construction, the $2$-spaces $\langle v,u\rangle$ and $\langle v,w_0\rangle$ are isometric and they admit an isometry $f$ such that $v^f=v$ and $u^f=w_0$. By Witt's Lemma~\cite[Proposition~$2.1.6$]{KL}, $f$ extends to an isometry $g$ of $V$. Thus $g\in G_v$ and $u^g=w_0$, which proves our claim. Therefore, we have \begin{equation}\label{eq1} \langle \pi_0,\pi_0\rangle=2q-1. \end{equation}
Next we need to refine the information in~\eqref{eq1}. Let $P$ be the stabilizer of the $1$-subspace $\langle v\rangle$ in $G$. Then $P$ is a maximal parabolic subgroup of $G$ and $P/G_v$ is cyclic of order $q-1$. Write $\eta=1_{G_v}^P$ and observe that $\eta=\sum_{\zeta\in \mathop{\mathrm{Irr}}(P/G_v)}\zeta$, where by abuse of terminology we identify the characters of $P/G_v$ with the characters of $P$ containing $G_v$ in the kernel. Thus \[\pi_0=1_{G_v}^G=(1_{G_v}^P)_P^G=\eta_P^G=\sum_{\zeta\in \mathop{\mathrm{Irr}}(P/G_v)}\zeta_P^G.\] Since every character of $G$ is real-valued~\cite{Gow}, we must have \[(\overline{\zeta})_P^G=\overline{\zeta_P^G}=\zeta_P^G,\]
where $\overline{x}$ denotes the complex conjugate of $x \in \mathbb{C}$. Let $\mathcal{S}$ be a set of representatives, up-to-complex conjugation, of the non-trivial characters of $\mathop{\mathrm{Irr}}(P/G_v)$. Since $|P/G_v|=q-1$ is odd, we see that $|\mathcal{S}|=q/2-1$. We have \[\pi_0=1_P^G+2\sum_{\zeta\in \mathcal{S}}\zeta_P^G.\] If we write $\pi'=\sum_{\zeta\in\mathcal{S}}\zeta_P^G$, then we have $\pi_0=1_P^G+2\pi'$.
Since $1_P^G$ is the permutation character of the rank $3$ action of $G$ on the $1$-dimensional subspaces of $V$, we have $1_P^G=1+\chi^++\chi^-$ for some distinct non-trivial irreducible characters $\chi^+$ and $\chi^-$ of $G$. Let $\Gamma$ be the graph with vertex set the $1$-subspaces of $V$ and edge sets $\{\langle v\rangle, \langle w\rangle\}$ whenever $v\perp w$. Observe that $\Gamma$ is strongly regular with parameters \[\left(\frac{q^{2m}-1}{q-1}, \frac{q^{2m-1}-q}{q-1},\frac{q^{2m-2}-1}{q-1}-2, \frac{q^{2m-2}-1}{q-1}\right).\] Hence the eigenvalues of $\Gamma$ have multiplicity $\frac{1}{2}\left(\frac{q^{2m}-q}{q-1}-q^m\right)$ and $\frac{1}{2}\left(\frac{q^{2m}-q}{q-1}+q^m\right)$ (see \cite[p. 27]{CV}).
Interchanging the roles of $\chi^+$ and $\chi^-$ if necessary, we may assume that $\chi^-(1)<\chi^+(1)$. The above direct computation proves that \begin{equation}\label{new} \chi^-(1)=\frac{1}{2}\left(\frac{q^{2m}-q}{q-1}-q^m\right)\quad \textrm{and}\quad\chi^+(1)=\frac{1}{2}\left(\frac{q^{2m}-q}{q-1}+q^m\right) \end{equation} (compare \cite[Section~$1$]{L}).
Fix $\zeta\in \mathcal{S}$. We claim that $\zeta_P^G$ is irreducible. From Mackey's irreducibility Criterion~\cite[Proposition~23, Section~$7.3$]{Serre}, we need to show that for every $s\in G\setminus P$, we have $\zeta_{sPs^{-1}\cap P}\neq \zeta^s$, where $\zeta^s$ is the character of $sPs^{-1}\cap P$ defined by $(\zeta^s)(x)=\zeta(s^{-1}xs)$ and, as usual, $\zeta_{sPs^{-1}\cap P}$ is the restriction of $\zeta$ to $sPs^{-1}\cap P$. Fix a monomorphism $\psi$ from $P/P'$ into $\mathbb C^*$. Since $\zeta$ is a class function of $P$, we need to consider only elements $s$ in distinct $(P,P)$-double cosets. These correspond to the $P$-orbits $\langle v\rangle$, $v^\perp\setminus\langle v\rangle$ and $V\setminus v^\perp$. Let $H=\langle v,u\rangle$ be a hyperbolic plane and choose $s\in G$ such that \[vs=u,\quad us=u\quad\textrm{and}\quad s_{H^\perp}=1_{H^\perp}.\] A calculation shows that $\zeta^s(x)=\psi(\mu^{-1})=\overline{\zeta(x)}$, where $vx=\mu v$. Since $q-1$ is odd, we have $\zeta(x)\ne\zeta^s(x)$ when $\mu\ne1$. Therefore $\zeta\neq \zeta^s$. Finally choose $s\in G$ such that $(v,u,w,z)s=(w,z,v,u)$, where $H=\langle v,u\rangle\perp \langle w,z\rangle$ is an orthogonal sum of hyperbolic planes and $s_{H^\perp}=1_{H^\perp}$. Another calculation shows that $\zeta^s(x)=\psi(\lambda)$ and $\zeta(x)=\psi(\mu)$, where $vx=\mu v$ and $wx=\lambda$. If $\mu\ne\lambda$, then $\zeta^s(x)\ne\zeta(x)$ and hence $\zeta^s\ne\zeta$. Our claim is now proved.
Write $\pi'=\sum_{i=1}^\ell m_i\chi_i$ as a linear combination of the distinct irreducible constituents of $\pi'$. Observe that, by the previous paragraph, each $\chi_i$ is of the form $\zeta_P^G$, for some $\zeta\in \mathcal{S}$. Therefore $\chi_i$ has degree $|G:P|$ for each $i$ and, in particular, $1$, $\chi^+$ and $\chi^-$ are not irreducible constituents of $\pi'$.
The number of irreducible constituents of $\pi_0$ is
\[1+1+1+2(m_1+\cdots +m_\ell)= 3+2|\mathcal{S}|=3+2\left(\frac{q}{2}-1\right)=q+1,\] and by~\eqref{eq1} we have \[3+4m_1^2+\cdots+4m_\ell^2=2q-1.\] Multiplying the first equation by $-2$ and adding the second equation we have
\[-3+4m_1(m_1-1)+\cdots+4m_\ell(m_\ell-1)=-3.\] It follows that $m_1=\cdots =m_\ell=1$, and hence $\ell=q/2-1$. This shows that $\pi'$ is multiplicity-free.
Summing up, we have \begin{equation}\label{eq11} \pi_0=1+\chi^++\chi^-+2\pi',\quad\langle\pi',\pi'\rangle=\frac{q}{2}-1,\quad\langle 1+\chi^++\chi^-,\pi'\rangle=0. \end{equation}
We now turn our attention to the characters $\pi^+$ and $\pi^-$. By Frobenius reciprocity, or by~\cite[Theorem~$1$~(i) and~(ii)]{I}, we see that \begin{equation}\label{eq2} \langle \pi^+,\pi^+\rangle=\langle \pi^-,\pi^-\rangle=\frac{q}{2}+1. \end{equation}
By~\cite[Lemma~$2$ (iii) and (iv)]{I}, the orbits of $\mathrm{O}_{2m}^-(q)$ in its action on $\Omega^+$ are in one-to-one correspondence with the elements in $\{\alpha+\alpha^2\mid \alpha\in \mathbb{F}_q\}$. In particular, we have $\langle \pi^+|_{\mathrm{O}_{2m}^-(q)},1\rangle=|\{\alpha+\alpha^2\mid \alpha\in \mathbb{F}_q\}|=q/2$. Now Frobenius reciprocity implies that \begin{equation}\label{eq3} \langle \pi^+,\pi^-\rangle=\frac{q}{2}. \end{equation}
Next we show that \begin{equation}\label{eq4} \langle \pi_0,\pi^+\rangle=\langle\pi_0,\pi^-\rangle=q. \end{equation} Using Frobenius reciprocity, it suffices to show that the number of orbits of $\mathrm{O}_{2m}^\pm(q)$ on $V\setminus\{0\}$ is $q$. Fix $\varepsilon\in \{+,-\}$ and let $Q^\varepsilon~$ be the quadratic form on $V$ preserved by $\mathrm{O}_{2m}^\varepsilon(q)$. For $\lambda\in \mathbb{F}_q$, we see from~\cite[Lemma~$2.10.5$~(ii)]{KL} that $\Omega_{2m}^\varepsilon(q)$ is transitive on $V_\lambda^\varepsilon=\{v\in V\setminus\{0\}\mid Q^\varepsilon(v)=\lambda\}$. In particular, $\{V_\lambda^\varepsilon \mid \lambda\in \mathbb{F}_q \}$ is the set of orbits of $\Omega_{2m}^\varepsilon(q)$ on $V\setminus\{0\}$. Since $\mathrm{O}_{2m}^\varepsilon(q)$ is the isometry group of $Q^\varepsilon$ we see that $\{V^{\epsilon}_\lambda \mid \lambda \in \mathbb{F}_{q}\}$ is also the set of orbits of $\mathrm{O}_{2m}^\varepsilon(q)$ on $V\setminus\{0\}$, and~\eqref{eq4} is now proved.
Since $\pi^+$ is multiplicity-free, up to reordering, by~\eqref{eq11} and~\eqref{eq4}, we may assume that \begin{equation*} \pi^+=1+a\chi^-+b\chi^++\sum_{i=1}^t\chi_i+\rho, \end{equation*}
where $a,b\in \{0,1\}$, $0\le t\le\frac q2-1$ and $\langle \pi_0,\rho\rangle=0$. By ~\eqref{eq4}, we have $q-2\ge 2t=q-1-a-b\ge q-3$. Hence $2t=q-2$ and $\{a,b\}=\{0,1\}$. Since $\pi^+(1)=|\Omega^+|=q^m(q^m+1)/2$ and $\pi'(1)=(q/2-1)|G:P|=(q/2-1)(q^{2m}-1)/(q-1)$, it follows by~\eqref{new} that $a=0$ and $b=1$. By~\eqref{eq2}, we have $\pi^+=1+\chi^++\pi'$.
Now~\eqref{eq11},~\eqref{eq2},~\eqref{eq3} and~\eqref{eq4} imply immediately that $\pi^-=1+\chi^-+\pi'$. This shows that \[\pi^++\pi^-=(1+\chi^++\pi')+(1+\chi^{-}+\pi')=1+1+\chi^++\chi^-+2\pi'=1+\pi_0=\pi,\] which completes the proof of Theorem~\ref{thrm}. \end{proof}
We note that the ``$q=2$'' case of Theorem~\ref{thrm} was first proved by Siemons and Zalesskii in~\cite[Proposition~$3.1$]{SZ}. This case is particularly easy to deal with (considering that the action of $G$ on both $\Omega^+$ and $\Omega^-$ is $2$-transitive) and its proof depends only on Frobenius reciprocity. However, the general statement (valid for every even $q$) of Theorem~\ref{thrm} was undoubtedly inspired by their observation.
Theorem~\ref{thrm} reproduces the following result as an immediate corollary (see \cite[Theorem 6]{D}). \begin{corollary}Every element of $\mathop{\mathrm{Sp}}_{2m}(q)$ is conjugate to an element of $\mathrm{O}_{2m}^+(q)$ or of $\mathrm{O}_{2m}^-(q)$. \end{corollary} \begin{proof} Let $g\in \mathop{\mathrm{Sp}}_{2m}(q)$. Then $\pi(g)=(1+\pi_0)(g)=1(g)+\pi_0(g)\geq 1(g)=1$ and therefore, since $\pi=\pi^{+} + \pi^{-}$ by Theorem~\ref{thrm}, either $\pi^+(g)\geq 1$ or $\pi^-(g)\geq 1$; that is, $g$ fixes some point in $\Omega^+$ or in $\Omega^-$. In the first case $g$ has a conjugate in $\mathrm{O}_{2m}^{+}(q)$ and in the second case $g$ has a conjugate in $\mathrm{O}_{2m}^-(q)$. \end{proof}
\thebibliography{10} \bibitem{C}P. J. Cameron, \textit{Permutation groups}, London Mathematical Society Student Texts, 45. Cambridge University Press, Cambridge, 1999.
\bibitem{CV}P. J. Cameron, J. H. van Lint, \textit{Designs, graphs, codes and their links}, London Mathematical Society Student Texts, 22. Cambridge University Press, Cambridge, 1991.
\bibitem{ATLAS}J.~H.~Conway, R.~T.~Curtis, S.~P.~Norton, R.~A.~Parker, R.~A.~Wilson, \textit{Atlas of finite groups}, Clarendon Press, Oxford, 1985.
\bibitem{D}R.~H.~Dye, Interrelations of symplectic and orthogonal groups in characteristic two, \textit{J. Algebra} \textbf{59} (1979), 202--221.
\bibitem{FK}P.~F\"{o}rster, L.~G.~Kov\'acs, A problem of Wielandt on finite permutation groups, \textit{J. London Math. Soc. (2)} \textbf{41} (1990), 231--243.
\bibitem{Gow}R.~Gow, Products of two involutions in classical groups of characteristic $2$, \textit{J. Algebra} \textbf{71} (1981), 583--591.
\bibitem{GS}R.~M.~Guralnick, J.~Saxl, Primitive permutation characters, \textit{London Math. Soc. Lecture Note Ser. } \textbf{165}, Cambridge Univ. Press, Cambridge, 1992, 364--376.
\bibitem{I}N.~F.~J. Inglis, The embedding $\mathrm{O}(2m,2^k)\leq \mathrm{Sp}(2m,2^k)$, \textit{Arch. Math.} \textbf{54} (1990), 327--330.
\bibitem{KL}P.~Kleidman, M.~Liebeck, \textit{The Subgroup Structure of the Finite Classical Groups}, London Math. Society Lecture Notes 129, Cambridge University Press, Cambridge, 1990.
\bibitem{L}M.~W.~Liebeck, Permutation modules for rank $3$ symplectic and orthogonal groups, \textit{J. Algebra} \textbf{92} (1985), 9--15.
\bibitem{LPS}M.~W.~Liebeck, C.~E.~Praeger, J.~Saxl, \textit{The maximal
factorizations of the finite simple groups and their automorphism
groups}, Memoirs of the American Mathematical Society, Volume
\textbf{86}, Nr \textbf{432}, Providence, Rhode Island, USA, 1990.
\bibitem{Serre}J-P.~Serre, \textit{Linear representations of finite groups}, Graduate Texts in Mathematics \textbf{42}, Springer-Verlag, New York, 1977
\bibitem{SZ}J.~Siemons, A.~Zalesskii, Regular orbits of cyclic subgroups in permutation representations of certain simple groups, \textit{J. Algebra} \textbf{256} (2002), 611--625.
\bibitem{P}P.~Spiga, Permutation characters and fixed-point-free elements in permutation groups, \textit{J. Algebra} \textbf{299} (2006), 1--7.
\bibitem{W}H.~Wielandt, Problem~$6.6$, The Kourovka Notebook, \textit{Amer. Math. Soc. Translations (2)} \textbf{121} (1983).
\end{document} | arXiv |
Simpson's paradox
Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics,[1][2][3] and is particularly problematic when frequency data are unduly given causal interpretations.[4] The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling[4][5] (e.g., through cluster analysis[6]).
Simpson's paradox has been used to illustrate the kind of misleading results that the misuse of statistics can generate.[7][8]
Edward H. Simpson first described this phenomenon in a technical paper in 1951,[9] but the statisticians Karl Pearson (in 1899[10]) and Udny Yule (in 1903[11]) had mentioned similar effects earlier. The name Simpson's paradox was introduced by Colin R. Blyth in 1972.[12] It is also referred to as Simpson's reversal, the Yule–Simpson effect, the amalgamation paradox, or the reversal paradox.[13]
Mathematician Jordan Ellenberg argues that Simpson's paradox is misnamed as "there's no contradiction involved, just two different ways to think about the same data" and suggests that its lesson "isn't really to tell us which viewpoint to take but to insist that we keep both the parts and the whole in mind at once."[14]
Examples
UC Berkeley gender bias
One of the best-known examples of Simpson's paradox comes from a study of gender bias among graduate school admissions to University of California, Berkeley. The admission figures for the fall of 1973 showed that men applying were more likely than women to be admitted, and the difference was so large that it was unlikely to be due to chance.[15][16]
All Men Women
Applicants Admitted Applicants Admitted Applicants Admitted
Total 12,763 41% 8,442 44% 4,321 35%
However, when taking into account the information about departments being applied to, the different rejection percentages reveal the different difficulty of getting into the department, and at the same time it showed that women tended to apply to more competitive departments with lower rates of admission, even among qualified applicants (such as in the English department), whereas men tended to apply to less competitive departments with higher rates of admission (such as in the engineering department). The pooled and corrected data showed a "small but statistically significant bias in favor of women".[16]
The data from the six largest departments are listed below:
Department All Men Women
Applicants Admitted Applicants Admitted Applicants Admitted
A 933 64% 825 62% 108 82%
B 585 63% 560 63% 25 68%
C 918 35% 325 37% 593 34%
D 792 34% 417 33% 375 35%
E 584 25% 191 28% 393 24%
F 714 6% 373 6% 341 7%
Total 4526 39% 2691 45% 1835 30%
Legend:
greater percentage of successful applicants than the other gender
greater number of applicants than the other gender
bold - the two 'most applied for' departments for each gender
The entire data showed total of 4 out of 85 departments to be significantly biased against women, while 6 to be significantly biased against men (not all present in the 'six largest departments' table above). Notably, the numbers of biased departments were not the basis for the conclusion, but rather it was the gender admissions pooled across all departments, while weighing by each department's rejection rate across all of its applicants.[16]
Kidney stone treatment
Another example comes from a real-life medical study[17] comparing the success rates of two treatments for kidney stones.[18] The table below shows the success rates (the term success rate here actually means the success proportion) and numbers of treatments for treatments involving both small and large kidney stones, where Treatment A includes open surgical procedures and Treatment B includes closed surgical procedures. The numbers in parentheses indicate the number of success cases over the total size of the group.
Treatment
Stone size
Treatment A Treatment B
Small stones Group 1
93% (81/87)
Group 2
87% (234/270)
Large stones Group 3
73% (192/263)
Group 4
69% (55/80)
Both 78% (273/350)83% (289/350)
The paradoxical conclusion is that treatment A is more effective when used on small stones, and also when used on large stones, yet treatment B appears to be more effective when considering both sizes at the same time. In this example, the "lurking" variable (or confounding variable) causing the paradox is the size of the stones, which was not previously known to researchers to be important until its effects were included.
Which treatment is considered better is determined by which success ratio (successes/total) is larger. The reversal of the inequality between the two ratios when considering the combined data, which creates Simpson's paradox, happens because two effects occur together:
1. The sizes of the groups, which are combined when the lurking variable is ignored, are very different. Doctors tend to give cases with large stones the better treatment A, and the cases with small stones the inferior treatment B. Therefore, the totals are dominated by groups 3 and 2, and not by the two much smaller groups 1 and 4.
2. The lurking variable, stone size, has a large effect on the ratios; i.e., the success rate is more strongly influenced by the severity of the case than by the choice of treatment. Therefore, the group of patients with large stones using treatment A (group 3) does worse than the group with small stones, even if the latter used the inferior treatment B (group 2).
Based on these effects, the paradoxical result is seen to arise because the effect of the size of the stones overwhelms the benefits of the better treatment (A). In short, the less effective treatment B appeared to be more effective because it was applied more frequently to the small stones cases, which were easier to treat.[18]
Batting averages
A common example of Simpson's paradox involves the batting averages of players in professional baseball. It is possible for one player to have a higher batting average than another player each year for a number of years, but to have a lower batting average across all of those years. This phenomenon can occur when there are large differences in the number of at bats between the years. Mathematician Ken Ross demonstrated this using the batting average of two baseball players, Derek Jeter and David Justice, during the years 1995 and 1996:[19][20]
Year
Batter
1995 1996 Combined
Derek Jeter 12/48 .250 183/582 .314 195/630 .310
David Justice 104/411 .253 45/140 .321 149/551 .270
In both 1995 and 1996, Justice had a higher batting average (in bold type) than Jeter did. However, when the two baseball seasons are combined, Jeter shows a higher batting average than Justice. According to Ross, this phenomenon would be observed about once per year among the possible pairs of players.[19]
Vector interpretation
Simpson's paradox can also be illustrated using a 2-dimensional vector space.[21] A success rate of $ {\frac {p}{q}}$ (i.e., successes/attempts) can be represented by a vector ${\vec {A}}=(q,p)$, with a slope of $ {\frac {p}{q}}$. A steeper vector then represents a greater success rate. If two rates $ {\frac {p_{1}}{q_{1}}}$ and $ {\frac {p_{2}}{q_{2}}}$ are combined, as in the examples given above, the result can be represented by the sum of the vectors $(q_{1},p_{1})$ and $(q_{2},p_{2})$, which according to the parallelogram rule is the vector $(q_{1}+q_{2},p_{1}+p_{2})$, with slope $ {\frac {p_{1}+p_{2}}{q_{1}+q_{2}}}$.
Simpson's paradox says that even if a vector ${\vec {L}}_{1}$ (in orange in figure) has a smaller slope than another vector ${\vec {B}}_{1}$ (in blue), and ${\vec {L}}_{2}$ has a smaller slope than ${\vec {B}}_{2}$, the sum of the two vectors ${\vec {L}}_{1}+{\vec {L}}_{2}$ can potentially still have a larger slope than the sum of the two vectors ${\vec {B}}_{1}+{\vec {B}}_{2}$, as shown in the example. For this to occur one of the orange vectors must have a greater slope than one of the blue vectors (here ${\vec {L}}_{2}$ and ${\vec {B}}_{1}$), and these will generally be longer than the alternatively subscripted vectors – thereby dominating the overall comparison.
Correlation between variables
Simpson's reversal can also arise in correlations, in which two variables appear to have (say) a positive correlation towards one another, when in fact they have a negative correlation, the reversal having been brought about by a "lurking" confounder. Berman et al.[22] give an example from economics, where a dataset suggests overall demand is positively correlated with price (that is, higher prices lead to more demand), in contradiction of expectation. Analysis reveals time to be the confounding variable: plotting both price and demand against time reveals the expected negative correlation over various periods, which then reverses to become positive if the influence of time is ignored by simply plotting demand against price.
Psychology
Psychological interest in Simpson's paradox seeks to explain why people deem sign reversal to be impossible at first, offended by the idea that an action preferred both under one condition and under its negation should be rejected when the condition is unknown. The question is where people get this strong intuition from, and how it is encoded in the mind.
Simpson's paradox demonstrates that this intuition cannot be derived from either classical logic or probability calculus alone, and thus led philosophers to speculate that it is supported by an innate causal logic that guides people in reasoning about actions and their consequences.[4] Savage's sure-thing principle[12] is an example of what such logic may entail. A qualified version of Savage's sure thing principle can indeed be derived from Pearl's do-calculus[4] and reads: "An action A that increases the probability of an event B in each subpopulation Ci of C must also increase the probability of B in the population as a whole, provided that the action does not change the distribution of the subpopulations." This suggests that knowledge about actions and consequences is stored in a form resembling Causal Bayesian Networks.
Probability
A paper by Pavlides and Perlman presents a proof, due to Hadjicostas, that in a random 2 × 2 × 2 table with uniform distribution, Simpson's paradox will occur with a probability of exactly 1⁄60.[23] A study by Kock suggests that the probability that Simpson's paradox would occur at random in path models (i.e., models generated by path analysis) with two predictors and one criterion variable is approximately 12.8 percent; slightly higher than 1 occurrence per 8 path models.[24]
Simpson's second paradox
A second, less well-known paradox was also discussed in Simpson's 1951 paper. It can occur when the "sensible interpretation" is not necessarily found in the separated data, like in the Kidney Stone example, but can instead reside in the combined data. Whether the partitioned or combined form of the data should be used hinges on the process giving rise to the data, meaning the correct interpretation of the data cannot always be determined by simply observing the tables.[25]
Judea Pearl has shown that, in order for the partitioned data to represent the correct causal relationships between any two variables, $X$ and $Y$, the partitioning variables must satisfy a graphical condition called "back-door criterion":[26][27]
1. They must block all spurious paths between $X$ and $Y$
2. No variable can be affected by $X$
This criterion provides an algorithmic solution to Simpson's second paradox, and explains why the correct interpretation cannot be determined by data alone; two different graphs, both compatible with the data, may dictate two different back-door criteria.
When the back-door criterion is satisfied by a set Z of covariates, the adjustment formula (see Confounding) gives the correct causal effect of X on Y. If no such set exists, Pearl's do-calculus can be invoked to discover other ways of estimating the causal effect.[4][28] The completeness of do-calculus [29][28] can be viewed as offering a complete resolution of the Simpson's paradox.
Criticism
One criticism is that the paradox is not really a paradox at all, but rather a failure to properly account for confounding variables or to consider causal relationships between variables.[30]
Another criticism of the apparent Simpson's paradox is that it may be a result of the specific way that data is stratified or grouped. The phenomenon may disappear or even reverse if the data is stratified differently or if different confounding variables are considered. Simpson's example actually highlighted a phenomenon called noncollapsibility,[31] which occurs when subgroups with high proportions do not make simple averages when combined. This suggests that the paradox may not be a universal phenomenon, but rather a specific instance of a more general statistical issue.
Critics of the apparent Simpson's paradox also argue that the focus on the paradox may distract from more important statistical issues, such as the need for careful consideration of confounding variables and causal relationships when interpreting data.[32]
Despite these criticisms, the apparent Simpson's paradox remains a popular and intriguing topic in statistics and data analysis. It continues to be studied and debated by researchers and practitioners in a wide range of fields, and it serves as a valuable reminder of the importance of careful statistical analysis and the potential pitfalls of simplistic interpretations of data.
See also
• Aliasing – Signal processing effect
• Anscombe's quartet – Four data sets with the same descriptive statistics, yet very different distributions
• Berkson's paradox – Tendency to misinterpret statistical experiments involving conditional probabilities
• Cherry picking – Fallacy of incomplete evidence
• Condorcet paradox – Situation in social choice theory where collective preferences are cyclic
• Ecological fallacy – Logical fallacy that occurs when group characteristics are applied to individuals
• Gerrymandering – Form of political manipulation
• Low birth-weight paradox – Statistical quirk of babies' birth weights
• Modifiable areal unit problem – Source of statistical bias
• Prosecutor's fallacy – Error in thinking which involves under-valuing base rate informationPages displaying short descriptions of redirect targets
• Will Rogers phenomenon – phenomenon in which moving an element from one set to another set raises the average values of both setsPages displaying wikidata descriptions as a fallback
• Spurious correlation
• Omitted-variable bias
References
1. Clifford H. Wagner (February 1982). "Simpson's Paradox in Real Life". The American Statistician. 36 (1): 46–48. doi:10.2307/2684093. JSTOR 2684093.
2. Holt, G. B. (2016). Potential Simpson's paradox in multicenter study of intraperitoneal chemotherapy for ovarian cancer. Journal of Clinical Oncology, 34(9), 1016–1016.
3. Franks, Alexander; Airoldi, Edoardo; Slavov, Nikolai (2017). "Post-transcriptional regulation across human tissues". PLOS Computational Biology. 13 (5): e1005535. arXiv:1506.00219. Bibcode:2017PLSCB..13E5535F. doi:10.1371/journal.pcbi.1005535. ISSN 1553-7358. PMC 5440056. PMID 28481885.
4. Judea Pearl. Causality: Models, Reasoning, and Inference, Cambridge University Press (2000, 2nd edition 2009). ISBN 0-521-77362-8.
5. Kock, N., & Gaskins, L. (2016). Simpson's paradox, moderation and the emergence of quadratic relationships in path models: An information systems illustration. International Journal of Applied Nonlinear Science, 2(3), 200–234.
6. Rogier A. Kievit, Willem E. Frankenhuis, Lourens J. Waldorp and Denny Borsboom, Simpson's paradox in psychological science: a practical guide https://doi.org/10.3389/fpsyg.2013.00513
7. Robert L. Wardrop (February 1995). "Simpson's Paradox and the Hot Hand in Basketball". The American Statistician, 49 (1): pp. 24–28.
8. Alan Agresti (2002). "Categorical Data Analysis" (Second edition). John Wiley and Sons ISBN 0-471-36093-7
9. Simpson, Edward H. (1951). "The Interpretation of Interaction in Contingency Tables". Journal of the Royal Statistical Society, Series B. 13: 238–241.
10. Pearson, Karl; Lee, Alice; Bramley-Moore, Lesley (1899). "Genetic (reproductive) selection: Inheritance of fertility in man, and of fecundity in thoroughbred racehorses". Philosophical Transactions of the Royal Society A. 192: 257–330. doi:10.1098/rsta.1899.0006.
11. G. U. Yule (1903). "Notes on the Theory of Association of Attributes in Statistics". Biometrika. 2 (2): 121–134. doi:10.1093/biomet/2.2.121.
12. Colin R. Blyth (June 1972). "On Simpson's Paradox and the Sure-Thing Principle". Journal of the American Statistical Association. 67 (338): 364–366. doi:10.2307/2284382. JSTOR 2284382.
13. I. J. Good, Y. Mittal (June 1987). "The Amalgamation and Geometry of Two-by-Two Contingency Tables". The Annals of Statistics. 15 (2): 694–711. doi:10.1214/aos/1176350369. ISSN 0090-5364. JSTOR 2241334.
14. Ellenberg, Jordan (May 25, 2021). Shape: The Hidden Geometry of Information, Biology, Strategy, Democracy and Everything Else. New York: Penguin Press. p. 228. ISBN 978-1-9848-7905-9. OCLC 1226171979.
15. David Freedman, Robert Pisani, and Roger Purves (2007), Statistics (4th edition), W. W. Norton. ISBN 0-393-92972-8.
16. P.J. Bickel, E.A. Hammel and J.W. O'Connell (1975). "Sex Bias in Graduate Admissions: Data From Berkeley" (PDF). Science. 187 (4175): 398–404. Bibcode:1975Sci...187..398B. doi:10.1126/science.187.4175.398. PMID 17835295. S2CID 15278703. Archived (PDF) from the original on 2016-06-04.
17. C. R. Charig; D. R. Webb; S. R. Payne; J. E. Wickham (29 March 1986). "Comparison of treatment of renal calculi by open surgery, percutaneous nephrolithotomy, and extracorporeal shockwave lithotripsy". Br Med J (Clin Res Ed). 292 (6524): 879–882. doi:10.1136/bmj.292.6524.879. PMC 1339981. PMID 3083922.
18. Steven A. Julious; Mark A. Mullee (3 December 1994). "Confounding and Simpson's paradox". BMJ. 309 (6967): 1480–1481. doi:10.1136/bmj.309.6967.1480. PMC 2541623. PMID 7804052.
19. Ken Ross. "A Mathematician at the Ballpark: Odds and Probabilities for Baseball Fans (Paperback)" Pi Press, 2004. ISBN 0-13-147990-3. 12–13
20. Statistics available from Baseball-Reference.com: Data for Derek Jeter; Data for David Justice.
21. Kocik Jerzy (2001). "Proofs without Words: Simpson's Paradox" (PDF). Mathematics Magazine. 74 (5): 399. doi:10.2307/2691038. JSTOR 2691038. Archived (PDF) from the original on 2010-06-12.
22. Berman, S. DalleMule, L. Greene, M., Lucker, J. (2012), "Simpson's Paradox: A Cautionary Tale in Advanced Analytics Archived 2020-05-10 at the Wayback Machine", Significance.
23. Marios G. Pavlides & Michael D. Perlman (August 2009). "How Likely is Simpson's Paradox?". The American Statistician. 63 (3): 226–233. doi:10.1198/tast.2009.09007. S2CID 17481510.
24. Kock, N. (2015). How likely is Simpson's paradox in path models? International Journal of e-Collaboration, 11(1), 1–7.
25. Norton, H. James; Divine, George (August 2015). "Simpson's paradox ... and how to avoid it". Significance. 12 (4): 40–43. doi:10.1111/j.1740-9713.2015.00844.x.
26. Pearl, Judea (2014). "Understanding Simpson's Paradox". The American Statistician. 68 (1): 8–13. doi:10.2139/ssrn.2343788. S2CID 2626833.
27. Pearl, Judea (1993). "Graphical Models, Causality, and Intervention". Statistical Science. 8 (3): 266–269. doi:10.1214/ss/1177010894.
28. Pearl, J.; Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. New York, NY: Basic Books.
29. Shpitser, I.; Pearl, J. (2006). Dechter, R.; Richardson, T.S. (eds.). "Identification of Conditional Interventional Distributions". Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence. Corvallis, OR: AUAI Press: 437–444.
30. Blyth, Colin R. (June 1972). "On Simpson's Paradox and the Sure-Thing Principle". Journal of the American Statistical Association. 67 (338): 364–366. doi:10.1080/01621459.1972.10482387. ISSN 0162-1459.
31. Greenland, Sander (2021-11-01). "Noncollapsibility, confounding, and sparse-data bias. Part 2: What should researchers make of persistent controversies about the odds ratio?". Journal of Clinical Epidemiology. 139: 264–268. doi:10.1016/j.jclinepi.2021.06.004. ISSN 0895-4356. PMID 34119647.
32. Hernán, Miguel A.; Clayton, David; Keiding, Niels (June 2011). "The Simpson's paradox unraveled". International Journal of Epidemiology. 40 (3): 780–785. doi:10.1093/ije/dyr041. ISSN 1464-3685. PMC 3147074. PMID 21454324.
Bibliography
• Leila Schneps and Coralie Colmez, Math on trial. How numbers get used and abused in the courtroom, Basic Books, 2013. ISBN 978-0-465-03292-1. (Sixth chapter: "Math error number 6: Simpson's paradox. The Berkeley sex bias case: discrimination detection").
External links
Wikimedia Commons has media related to Simpson's paradox.
• Simpson's Paradox at the Stanford Encyclopedia of Philosophy, by Jan Sprenger and Naftali Weinberger.
• How statistics can be misleading – Mark Liddell – TED-Ed video and lesson.
• Pearl, Judea, "Understanding Simpson’s Paradox" (PDF)
• Simpson's Paradox, a short article by Alexander Bogomolny on the vector interpretation of Simpson's paradox
• The Wall Street Journal column "The Numbers Guy" for December 2, 2009 dealt with recent instances of Simpson's paradox in the news. Notably a Simpson's paradox in the comparison of unemployment rates of the 2009 recession with the 1983 recession.
• At the Plate, a Statistical Puzzler: Understanding Simpson's Paradox by Arthur Smith, August 20, 2010
• Simpson's Paradox, a video by Henry Reich of MinutePhysics
| Wikipedia |
Making doomsayers right - a moon(s), planet alignment that matters
This question asks for hard science. All answers to this question should be backed up by equations, empirical evidence, scientific papers, other citations, etc. Answers that do not satisfy this requirement might be removed. See the tag description for more information.
Considering our topic challenge, and the fantastic eclipse last Sunday, a question came to me.

Could there be a stable (relatively speaking) planetary system where an eclipse/alignment would actually make a noticeable difference on an Earth-like planet?
The eclipse/alignment should cause one or more of the following:
Large, powerful waves that can severely damage or flood coastal areas.
Earthquakes/tremors
Powerful storm systems
Other (include in your answer)
The planet:
should be as Earth-like as possible
must have at least one moon (it may have more)
The questions:
What would the planet, moon, and star sizes be?
What would the distances between them be? (Meaning between the planet and its moon or moons.)
Would eclipses occur on a regular or irregular basis?
hard-science moons natural-disasters
$\begingroup$ Pern has serious problems relating to a planet flying too close every so often. pern.wikia.com/wiki/Thread Not #hard-science though. XD $\endgroup$ – Jerenda Feb 12 '16 at 15:20
$\begingroup$ You could have a Janus/Epimetheus orbit, where the two planets pass close enough to each other to cause earthquakes, floods, and volcanic activity. One apocalypse a year! $\endgroup$ – Xandar The Zenon Feb 12 '16 at 15:58
$\begingroup$ It would actually be more like two apocalypses, on on each planet. Years could also go by really slowly, so you have more of a gap in between apocalypses. $\endgroup$ – Xandar The Zenon Feb 12 '16 at 16:06
$\begingroup$ I get asking for a particular positioning of the planets but an eclipse in itself its no reason for tidal waves, earthquakes etc, its one body obscuring another. $\endgroup$ – Erik vanDoren Feb 12 '16 at 18:42
$\begingroup$ @ErikvanDoren true, but during an eclipse they are in alignment, so this is more relevant for solar eclipses, as the forces exerted on earth would all be in one direction. $\endgroup$ – James Feb 12 '16 at 19:03
I'm ~99% certain that the effects of a second celestial body on seismic activity on an Earth-like planet have been covered before (in that case, by a second Earth-like planet); if anyone can point me to it, that would be great. The conclusion - if I remember correctly, and I think I do - was that there wouldn't be any major effects in this area. I might have supported that conclusion, in which case I may have been wrong.
Scientific American has an interesting article on the subject. It turns out that a causal relationship between the moon and seismic activity was first postulated a long time ago. Scientific American itself published a minor story on the idea in 1855, based on the work of one Alexis Perrey. Apparently, Perrey showed three correlated relationships:
The frequency of earthquakes/tremors is increased during a syzygy - a time when the Earth, the Moon and the Sun are in a straight line.
The frequency increases during the Moon's closest approach (perigee), and decreases during the Moon's furthest approach (apogee).
The frequency increases when "the moon is near the meridian, than when 60° from it." I'm not entirely sure what Perrey means here, so I won't attempt an interpretation.
Perrey's work comes from "7,000 observations", which seems convincing, but it is entirely based on observations, it seems - there is no explicit theory as to why this is the case. I'm not saying that should remove credence from it, but note that no causal relationship was proven.
More recently, Straser (2010) and Vergos et al. (2015) (paywalled version; a different version is available via ResearchGate) investigated the problem. The former also summarized previous work on the problem, which had attempted to show a number of relationships between earthquakes and the Moon. Here are some of those works:
Omori (1908): The rhythms of the tides can cause a rise in earthquake frequency.
Bagby (1973): Syzygies increase earthquake frequency (this is the same as one of Perrey's conclusions).
Kokus (2006): Changes in the Moon's motion can influence fault behavior.
Kolvankar et al. (2010): Earthquake frequencies change according to the lunar cycle.
Zhao (2008): The Earth can induce earthquakes on the Moon - "moonquakes".
The main point here is that tidal forces can apparently influence earthquake frequency. However, the author's conclusion was that - especially as regards his own research - links can be tenuous at times.
Vergos et al. studied an earthquake and related tremors in Greece, and established a relation between the phase angle of an earthquake ($\phi_i$) and the period of a relative tidal component ($T_d$): $$\phi_i=\left(\left[\frac{t_i-t_0}{T_d}\right]-\text{int}\left[\frac{t_i-t_0}{T_d}\right]\right)$$ Can we establish a causal relationship from all this data? Not necessarily. We have no theoretical model to explain it, either. The USGS has written some of the resultant phenomena off as coincidences (see this article). I think, however, that the evidence is compelling enough to show that some relationship might exist.
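As a minimal numerical sketch of that relation (the event times and tidal period below are made-up illustrative values, not data from Vergos et al.):

```python
# The phase angle above is just the fractional part of the elapsed
# time measured in tidal periods. Inputs here are illustrative.
def phase_angle(t_i, t_0, T_d):
    """Fractional part of (t_i - t_0)/T_d, in [0, 1)."""
    x = (t_i - t_0) / T_d
    return x - int(x)

# e.g. events 3.2 h and 18.7 h after t_0, semidiurnal period 12.42 h
for t in (3.2, 18.7):
    print(round(phase_angle(t, 0.0, 12.42), 3))   # 0.258, 0.506
```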
In your case, we can take advantage of syzygies. The more bodies - in this case, more moons - the greater the effects, in theory. The differential force experienced by Earth is proportional to $r^{-3}$, however, not $r^{-2}$ (see here; keep this in mind for calculations).
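To put rough numbers on that $r^{-3}$ scaling, here is a quick comparison using standard values for our own Moon and Sun (for orientation only; it is not a model of the fictional system):

```python
# Differential (tidal) acceleration across an Earth-sized planet:
# roughly 2*G*M*R_planet/d**3. In a syzygy the contributions add.
G = 6.674e-11        # m^3 kg^-1 s^-2
R_earth = 6.371e6    # m

def tidal_acc(M, d):
    return 2 * G * M * R_earth / d**3

moon = tidal_acc(7.35e22, 3.844e8)    # ~1.1e-6 m/s^2
sun = tidal_acc(1.989e30, 1.496e11)   # ~5.0e-7 m/s^2
print(moon, sun, moon + sun)          # aligned bodies simply sum
```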
To answer your questions about mass and distance, I say only that it is up to you. We don't know enough to come up with accurate formulae for the effects - if they exist - so we can't know for sure what conditions are necessary to cause a given result. I can tell you that the alignment - for it is an alignment that you need, not an eclipse - would be periodic, because orbits (and therefore orbital alignments) are periodic.
I wrote more about stability in my answer here to your related question.
HDE 226868♦
Another approach:
It's not a moon that's causing the eclipse. Rather, it's a large planet that occasionally passes very close to the world in question. There will be stability issues here, but so far they have been countered by the fact that the worlds are in resonance. The perihelion for the world getting beat up (the other world suffers also, but figure it's a gas giant) has been very slowly decaying due to these encounters; as it decays, the encounters get closer and closer (and thus more damaging) until eventually you either get a major orbital disruption or else its destruction.
Loren Pechtel
$\begingroup$ This doesn't meet the focus of the question, nor does it satisfy the requirements of the hard-science tag. $\endgroup$ – HDE 226868♦ Sep 30 '15 at 0:14
$\begingroup$ @HDE226868 He wants something astronomical that actually matters--this would. It meets that part of it. And why can't you have an orbit that causes a close encounter? If it wasn't a resonance orbit it would certainly result in the planet being flung away (and will in time anyway) but I doubt he needs something that lasts more than historical time. $\endgroup$ – Loren Pechtel Sep 30 '15 at 1:23
$\begingroup$ Sure, it meets that requirement, but it doesn't back itself up like it should. $\endgroup$ – HDE 226868♦ Sep 30 '15 at 1:27
Rather than using the more conventional approach of abusing gravitational effects to produce our doomsday, this answer relies on solar radiation and optics. There is one caveat - it requires a very weird moon which, while scientifically possible, must be an artificially created structure.
Make the moon a spherical lens. During an 'eclipse', the moon will focus the sun's rays to a single point on the earth's surface. This will cause rapid concentrated heating, leading to drastic weather changes (as well as melting any location unfortunate enough to fall under the focal point).
This will only occur during a perfectly aligned total lunar eclipse; an imperfect alignment will cause the focal beam to miss the earth.
While this is perhaps somewhat outside of the intended scope of the question, it does fit within the spirit of the question - a celestial alignment causing doomsday-like effects.
Other than the composition of the moon, the solar system is similar to ours for the purposes of interplanetary distances and eclipse frequency.
The Lens:
The effective focal length of a spherical (ball) lens is:
$$EFL = \frac{nD}{4(n-1)}$$
Then, let: $$D = Diameter\;of\;moon = 3,474\,km$$ $$EFL = Distance\;from\;earth\;to\;moon = 384,400\,km$$
Putting this into the equation, this gives an index of refraction of approximately n = 1.0023. We can achieve something close to this by using benzene gas as our refractive material (n = 1.0018). To get a bit closer to this value of n, we can increase the distance to 493,774 km or decrease the diameter to 2704 km.
This leaves us with a moon comprised of a solid transparent shell filled with benzene gas or similar for our lens.
Note that this means the moon will be much lighter than our moon, so the planet would not likely experience any tides.
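A quick check of the index-of-refraction arithmetic, solving the ball-lens relation for n (the numbers are the ones used above):

```python
# EFL = n*D/(4*(n-1))  =>  n = 4*EFL/(4*EFL - D)
def index_for(efl_km, d_km):
    return 4 * efl_km / (4 * efl_km - d_km)

print(index_for(384_400, 3_474))   # ~1.00226, cf. n = 1.0023 above
print(index_for(493_774, 3_474))   # ~1.00176, close to benzene's 1.0018
```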
During an 'eclipse', the moon lens will focus (most of) the sunlight passing through it to a small point on the earth's surface.
At earth's orbit, the power density of sunlight is approximately 1.36 kW/m² (source).
Given a diameter of 3,474 km (r = 1.737 × 10⁶ m), the cross-sectional area of the moon will be:
$$\pi r ^2 = 9.4787 \times 10^{12}\,m^2 $$
This means that we will have 1.289 × 10¹⁶ W passing through the lens.
If we assume a totality/alignment of about 100 minutes (6000 s) (source), this gives a total energy output of about 7.734 × 10¹⁹ J over the course of the eclipse.
While the location of the focal point will (rapidly) move across earth's surface during the eclipse, this is still enough energy to cause plenty of damage. For instance, if the focal point spent most of its time over ocean, it would boil away somewhere around 3 × 10¹⁶ g of water. Given that a typical hurricane produces about 2.1 × 10¹⁶ g of rain (source), this should be sufficient to produce some spectacular storms.
For some more context on how much energy we are seeing at the focal point, I took a look at https://en.wikipedia.org/wiki/Orders_of_magnitude_(energy).
Notably, the focal point delivers an amount of energy equivalent to the Hiroshima bomb (roughly 6 × 10¹³ J) about every five milliseconds.
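For reference, the arithmetic above can be checked in a few lines; the 2.6 MJ/kg figure for heating and then vaporizing ocean water is an assumed round number:

```python
import math

S = 1.36e3       # solar constant at earth's orbit, W/m^2
r = 1.737e6      # lens (moon) radius, m
t = 6000.0       # duration of totality, s

power = S * math.pi * r**2           # ~1.29e16 W through the lens
energy = power * t                   # ~7.7e19 J per eclipse

water_g = energy / 2.6e6 * 1e3       # ~3e16 g of water boiled away

hiroshima = 6.3e13                   # J, ~15 kt TNT
print(power, energy, water_g, hiroshima / power)   # last: ~5e-3 s
```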
Runic-Scribe
It is possible.
The planet should have more than one moon, revolving in the same plane, at different distances from the planet. The moons should be as large and close to the planet as practically possible, without messing things up.
The star should be as heavy as practically possible without messing things up.
The oceans should be very deep (~5 km average depth as compared to ~3 km on earth).
When/if all the moons get in line with the star, this compound eclipse would have horrible consequences. We are talking tsunamis (tidal effect), raging storms (tidal effect on the atmosphere) and earthquakes (tidal effects on the crust) here.
Youstay Igo
$\begingroup$ Hard Science!!! $\endgroup$ – James Sep 29 '15 at 19:59
$\begingroup$ It is. There are planets with more than one moon. And the moons can line up (even if they exist in different interfering orbital planes). $\endgroup$ – Youstay Igo Sep 29 '15 at 20:03
$\begingroup$ @YoustayIgo James is referencing the hard-science tag he has applied to the question. It requires much more rigorous detail in an answer than you have here so far. If this is actually a solution to the question there should be math and science (at the very least some citations) to back it up. $\endgroup$ – Avernium Sep 29 '15 at 21:08
$\begingroup$ @Avernium thanks for clearing up what I meant. Youstay, apologies for not elaborating, but yes the hard science tag comes with certain expectations of citation and/or calculation. $\endgroup$ – James Sep 30 '15 at 13:51
$\begingroup$ Could have been done when I was writing that answer, but too much work and hassle now. Can't be bothered to go on quoting planetary system and statistical analysis now. $\endgroup$ – Youstay Igo Sep 30 '15 at 13:53
Global-in-time Gevrey regularity solutions for the functionalized Cahn-Hilliard equation
Kelong Cheng 1, Cheng Wang 2, Steven M. Wise 3,* and Zixia Yuan 4
School of Science, Southwest University of Science and Technology, Mianyang, Sichuan 621010, China
Mathematics Department, The University of Massachusetts, North Dartmouth, MA 02747, USA
Mathematics Department, The University of Tennessee, Knoxville, TN 37996, USA
School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
* Corresponding author: [email protected]
Received April 2018 Published November 2019
Fund Project: C. Wang was supported by NSF grant DMS-1418689. S.M. Wise was supported by NSF grants DMS-1418692 and DMS-1719854
The existence and uniqueness of Gevrey regularity solutions for the functionalized Cahn-Hilliard (FCH) and Cahn-Hilliard-Willmore (CHW) equations are established. The energy dissipation law yields a uniform-in-time $ H^2 $ bound of the solution, and the polynomial patterns of the nonlinear terms enable one to derive a local-in-time solution with Gevrey regularity. A careful calculation reveals that the existence time interval length depends on the $ H^3 $ norm of the initial data. A further detailed estimate for the original PDE system indicates a uniform-in-time $ H^3 $ bound. Consequently, a global-in-time solution becomes available with Gevrey regularity.
Keywords: Functionalized Cahn-Hilliard equation, Gevrey regularity solution, global-in-time existence.
Mathematics Subject Classification: Primary: 35K35, 35K55.
Citation: Kelong Cheng, Cheng Wang, Steven M. Wise, Zixia Yuan. Global-in-time Gevrey regularity solutions for the functionalized Cahn-Hilliard equation. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020186
On law enforcement
Yup. You should be an economist.
The crisis in Ferguson has prompted a national dialogue about law enforcement tactics and the unfair targeting of innocents through "tough-on-crime" policies like racial profiling, mandatory minimums, and criminal procedures that make it easier to convict. However, economics tells us that "tough-on-crime" tactics do not always maximize law and order. The unfair targeting of innocent people for criminal investigation through racial profiling and stop-and-frisk style tactics actually increases the incidence of criminality at the margins. Here's how.
A Model in two parts
1 Modeling the incidence of criminality
We consider a model with a large number of households with preferences over consumption [$] C [$] and jail [$] J [$] according to [$] U=E_B\left[u\left(C\right)-J\right] [$] where [$] u [$] is strictly concave increasing in [$] C. [$] The household is endowed with lawful income of [$] y [$] and has the option of committing a burglary to steal [$] B [$] units of consumption. If the household is convicted, he serves jail time that yields [$] J [$] units of disutility (we can think of disutility from jail as being a function [$] j\left(s\left(B\right)\right) [$] where [$] s\left(B\right) [$] is a policy function prescribing sentences based on the magnitude [$] B [$] of the crime, and [$] j\left(\cdot\right) [$] as the utility function of jail time. However, we consider the extensive margin where [$] B [$] is fixed.)
The judicial authority investigates a share [$] q [$] of the population and the investigation leads to a conviction rate of [$] r_1\lt 1 [$] among those who are investigated and committed the crime (that's a sensitivity of [$]r_1[$]), and [$] r_2\lt r_1 [$] among individuals who are investigated but innocent (that's a specificity of [$]1-r_2[$]). Therefore, the probability of being convicted given that the household commits the crime is [$] p_1=q_1r_1 [$] and the probability of conviction given that the household does not commit the crime is [$] p_2=q_2r_2, [$] where [$]q_1,q_2[$] are the probabilities of investigating a guilty and an innocent person, respectively, which are functions of the total share of the population [$]q[$] that is investigated, as will be defined in section 2 below. [update: this paragraph has been edited to correct some of the notation]
The household's budget constraint is [$] C\leq y+B [$] if he commits the crime, and [$] C\leq y [$] otherwise. The household choice has two regimes, one where he commits crimes with probability 1, and one where he commits crime with probability 0, where utility from the former is [$] u\left(y+B\right)-p_1 J [$] and the utility from the latter is [$] u\left(y\right)-p_2 J. [$]
Borrowing from the indivisible labor literature1 and assuming a functional form for the utility function, we can rewrite the above in terms of a representative agent that chooses a probability of committing crime [$] \alpha [$] according to
\begin{align*}
\max_{\alpha}~ \ln\left(C\right)-\alpha p_1J-\left(1-\alpha\right)p_2J&\\
subject~to~C\leq y+\alpha B&
\end{align*}
Solving yields [$$] \alpha^*=\frac{1}{J\left(p_1-p_2\right)}-\frac{y}{B} [$$] This model captures our intuitions about criminal justice. For example, it is plainly apparent from the solution that increasing penalties [$] J, [$] all else equal, reduces crime rates. It is often assumed that stepping up investigations--that is, targeting a larger share of the population for investigations--will result in a reduction in crime rates. To examine this we take the derivative of [$] \alpha [$] with respect to the investigation rate [$] q: [$] [$$] \frac{\partial \alpha}{\partial q}=\frac{J}{\left(J\left(p_1-p_2\right)\right)^2}\left(\frac{\partial p_2}{\partial q}-\frac{\partial p_1}{\partial q}\right) [$$] which is less than zero if and only if [$$] \frac{\partial p_1}{\partial q}>\frac{\partial p_2}{\partial q}. [$$]
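To make the comparative statics concrete, here is a small numerical sketch (the parameter values are purely illustrative):

```python
# alpha* = 1/(J*(p1 - p2)) - y/B, clamped to [0, 1] since it is
# a probability. Raising the penalty J lowers the crime rate.
def alpha_star(J, p1, p2, y, B):
    a = 1.0 / (J * (p1 - p2)) - y / B
    return min(max(a, 0.0), 1.0)

for J in (5.0, 10.0, 20.0):
    print(J, alpha_star(J, p1=0.4, p2=0.1, y=1.0, B=2.0))
    # 0.167, then 0.0, then 0.0: harsher sentences, less crime
```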
2 Modeling the profiling decision
To understand that last derivative, we need a model of police profiling. Mixing models of heterogeneity with a representative agent framework can be problematic, but let's assume that utilities are such that this is valid. Assume that individuals are heterogeneous in such a way that their probability [$] \tilde{\alpha} [$] of committing a crime--as measured by a hypothetical social planner--is distributed according to a continuously differentiable distribution function [$] F\left(\tilde{\alpha}\right) [$] with support [$] \left[0,1\right]. [$] The judicial authority prioritizes investigations of individuals so that individuals with the highest [$] \tilde{\alpha} [$] probabilities are investigated first, followed by progressively lower probability types until they've exhausted their investigative resources--that is, until the share of the population being investigated equals the policy parameter [$] q. [$] Thus we can write [$$] q\equiv 1-F\left(\bar{\alpha}\right) [$$] where [$] \bar{\alpha} [$] is the lowest probability type to be investigated. Therefore, we have that
p_1&=\underbrace{\left(1-F\left(\bar{\alpha}\right)\right)}_{q}\underbrace{\int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha}}_{\alpha}r_1\\
p_2&=\underbrace{\left(1-F\left(\bar{\alpha}\right)\right)}_{q}\underbrace{\left(1-\int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha}\right)}_{1-\alpha}r_2
where [$] f\left(\tilde{\alpha}\right) [$] denotes the density function of [$] \tilde{\alpha} [$] and therefore the first derivative of [$] F\left(\tilde{\alpha}\right). [$] Hereafter we will write [$] E_\bar{\alpha} [$] to denote [$] \int^{1}_{\bar{\alpha}}\tilde{\alpha}f\left(\tilde{\alpha}\right)d\tilde{\alpha}, [$] which is the expected criminality of the population being investigated.
Thanks to the Leibniz rule, we can differentiate this to get
\frac{\partial p_1}{\partial q}&=E_\bar{\alpha}r_1+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)r_1\\
\frac{\partial p_2}{\partial q}&=\left(1-E_\bar{\alpha}\right)r_2-\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)r_2
Therefore, using the result derived in the first section, increasing enforcement decreases crime only if
E_\bar{\alpha}+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right)\gt \frac{r_2}{r_1+r_2} \label{conditions}
We can now state two propositions.
Proposition 1.
It is optimal to investigate everyone if and only if [$] E_0\geq \frac{r_2}{r_1+r_2}. [$]
Proof: Sufficiency follows immediately from \eqref{conditions} with [$] \bar{\alpha}=0. [$] Necessity follows from Proposition 2.
Proposition 2.
If [$] E_0\lt\frac{r_2}{r_1+r_2}, [$] then there exists [$] q^*\gt 0 [$] such that [$] \frac{\partial \alpha}{\partial q}\geq 0 [$] for all [$] q\gt q^*, [$] with strict inequality whenever [$] f\left(\tilde{\alpha}\right)\gt 0. [$] That is, there exists a point beyond which further increasing enforcement actually increases crime.
Proof: Denote [$] G\left(\bar{\alpha}\right)\equiv E_\bar{\alpha}+\bar{\alpha}\left(1-F\left(\bar{\alpha}\right)\right), [$] so that by \eqref{conditions} we have [$] \frac{\partial \alpha}{\partial q}\geq 0 [$] precisely when [$] G\left(\bar{\alpha}\right)\leq \frac{r_2}{r_1+r_2}. [$] If [$] G\left(\tilde{\alpha}\right)\lt \frac{r_2}{r_1+r_2} [$] for every [$] \tilde{\alpha}, [$] then the conclusion holds for all [$] q [$] and any [$] q^*\gt 0 [$] will do. Otherwise the set [$] A=\left\{\hat{\alpha}:G\left(\hat{\alpha}\right)=\frac{r_2}{r_1+r_2}\right\} [$] is nonempty. Since [$] F\left(\bar{\alpha}\right) [$] is continuously differentiable, [$] G\left(\bar{\alpha}\right) [$] is continuous, so [$] A [$] is closed and [$] \bar{\alpha}=\min A [$] exists. We postulated that [$] G\left(0\right)=E_0\lt \frac{r_2}{r_1+r_2}, [$] so by the intermediate value theorem [$] G\left(\tilde{\alpha}\right)\lt \frac{r_2}{r_1+r_2} [$] for all [$] \tilde{\alpha}\in \left[0,\bar{\alpha}\right) [$] (a crossing below [$] \bar{\alpha} [$] would contradict the minimality of [$] \bar{\alpha} [$]). Now let [$] q^*=1-F\left(\bar{\alpha}\right), [$] and note that [$] q\equiv 1-F\left(\tilde{\alpha}\right) [$] is monotonically decreasing in [$] \tilde{\alpha} [$] and one-to-one wherever [$] f\left(\tilde{\alpha}\right)\gt 0. [$] Hence [$] q\gt q^* [$] implies [$] \tilde{\alpha}\lt \bar{\alpha}, [$] so that [$] G\left(\tilde{\alpha}\right)\lt \frac{r_2}{r_1+r_2} [$] and therefore [$] \frac{\partial \alpha}{\partial q}\geq 0 [$] on [$] \left(q^*,1\right], [$] with strict inequality whenever [$] f\left(\tilde{\alpha}\right)\gt 0. [$] This concludes the proof.
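A numerical sketch of Proposition 2, with all parameter values chosen purely for illustration and a Beta-distributed population for which the hypothesis [$] E_0\lt \frac{r_2}{r_1+r_2} [$] holds:

```python
import numpy as np

# Types alpha~ ~ Beta(1, 9): f(a) = 9*(1-a)**8, so E_0 = 0.1.
# Illustrative conviction rates give threshold r2/(r1+r2) = 0.11.
r1, r2 = 0.89, 0.11
threshold = r2 / (r1 + r2)

a = np.linspace(0.0, 1.0, 100_001)        # cutoff type alpha-bar
F = 1 - (1 - a) ** 9                      # Beta(1, 9) cdf
E = (1 - a) ** 9 - 0.9 * (1 - a) ** 10    # int_a^1 t*f(t) dt, closed form
G = E + a * (1 - F)

abar = a[np.argmax(G >= threshold)]       # smallest root of G = threshold
q_star = (1 - abar) ** 9                  # q* = 1 - F(abar)
print(round(abar, 4), round(q_star, 3))   # ~0.012 and ~0.90

# For q > q* we have G < threshold, so the crime rate *rises* with q:
# in this example, investigating more than ~90% of the population is
# counterproductive.
```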
So what do these propositions say about stop-and-frisk? We are compelled to draw conclusions contrary to the beliefs of the New York City police commissioner: economics tells us that we can actually reduce crime by not investigating those individuals who are least likely to commit crimes, because this will reduce the wrongful conviction rate and increase the incentive to avoid committing crime. Stop-and-frisk policies do precisely the opposite: they target investigations indiscriminately at the public, innocent and guilty alike, which will increase wrongful convictions and undermine the disincentive our justice system aims to place on criminal acts.
So there you have it. Economics tells us that stop-and-frisk causes crime.
1. I'm probably not the first one to have applied the indivisible labor literature to criminality in this way, though I did not do a search. If you know of any papers that I have incidentally duplicated, let me know so I can give credit here. | CommonCrawl |
Let $P(x) = (x-1)(x-2)(x-3)$. For how many polynomials $Q(x)$ does there exist a polynomial $R(x)$ of degree 3 such that $P\left(Q(x)\right) = P(x)\cdot R(x)$?
The polynomial $P(x)\cdot R(x)$ has degree 6, so $Q(x)$ must have degree 2. Therefore $Q$ is uniquely determined by the ordered triple $(Q(1), Q(2),Q(3))$. When $x = 1$, 2, or 3, we have
\[0 = P(x)\cdot R(x) = P\left(Q(x)\right).\]It follows that $(Q(1), Q(2), Q(3))$ is one of the 27 ordered triples $(i, j, k)$, where $i$, $j$, and $k$ can be chosen from the set $\{1, 2, 3\}$.
However, the choices $(1, 1, 1)$, $(2, 2, 2)$, $(3, 3, 3)$, $(1, 2, 3)$, and $(3, 2, 1)$ lead to polynomials $Q(x)$ defined by $Q(x) = 1$, $2,$ $3,$ $x,$ and $4-x$, respectively, all of which have degree less than 2. The other $\boxed{22}$ choices for $(Q(1),Q(2),Q(3))$ yield non-collinear points, so in each case $Q(x)$ is a quadratic polynomial.
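As a quick sanity check, one can brute-force the count; $Q$ fails to be quadratic exactly when the three points are collinear, i.e. when $2j = i + k$:

```python
from itertools import product

triples = list(product((1, 2, 3), repeat=3))
quadratic = [t for t in triples if 2 * t[1] != t[0] + t[2]]
print(len(triples), len(quadratic))   # 27 22
```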
Quotient by a torsion group
Let $A$ be a finitely generated abelian group of rank $r$. The rank of the abelian group $A$ is the number of copies of $\mathbb Z$. Let $T$ be the torsion subgroup of $A$. Show that $\frac{A}{T(A)}\cong\mathbb Z^r$.
I don't know if it helps but I've managed to show that all the non-zero elements of $T(A)$ have infinite order.
I'm guessing some usage of the FTFAG will bring out the isomorphism but I don't know how to get rid of the $\frac{\mathbb Z}{n \mathbb Z}$ bits.
abelian-groups
Haikal Yeo
$\begingroup$ I'm confused. What is your definition of "rank $r$"? If you write $A\simeq T(A)\times \mathbb{Z}^r$, isn't this number by definition the rank in which case there isn't anything to show? $\endgroup$ – Matt Apr 28 '13 at 20:40
$\begingroup$ How have you managed to show that the elements of $T(A)$ have infinite order?! The torsion subgroup is precisely those elements of finite order in the group... $\endgroup$ – Warren Moore Apr 28 '13 at 20:49
$\begingroup$ $\mathbb{Q}$ is a rank one torsion free abelian group, so the statement is generally false. It is true for finitely generated abelian groups. $\endgroup$ – egreg Apr 28 '13 at 20:50
$\begingroup$ It is true that $T(A)$ is finite, but I don't see why this is relevant to the problem. If it is finitely generated, then by the structure theorem $A\simeq T(A)\times \mathbb{Z}^r$, so killing $T(A)$ just leaves you with $\mathbb{Z}^r$. $\endgroup$ – Matt Apr 28 '13 at 20:58
$\begingroup$ @DonAntonio I know it's not free! It's torsion-free. Probably I should have used a hyphen. $\endgroup$ – egreg Apr 28 '13 at 22:53
To say that the rank of an abelian group is "the number of copies of $\mathbf{Z}$" doesn't quite make sense for abelian groups which are not finitely generated. The rank of an arbitrary abelian group $A$ is by definition the $\mathbf{Q}$-dimension of the $\mathbf{Q}$-vector space $A\otimes_\mathbf{Z}\mathbf{Q}$. As egreg points out in the comments, $\mathbf{Q}$ is an abelian group of rank $1$ without torsion and it is not free of rank one because it is not finitely generated. So it is not true that an abelian group of rank $r$ is free of rank $r$ after taking the quotient modulo the torsion subgroup. For finitely generated abelian groups this is true, and follows, for example, from the structure theorem for finitely generated abelian groups (once one verifies that the number of copies of $\mathbf{Z}$ appearing in a decomposition of a finitely generated abelian group into a direct sum of cyclic groups is equal to the rank as defined above).
Keenan Kidwell
$\begingroup$ Thanks. I'll edit to include that $A$ should be finitely generated. $\endgroup$ – Haikal Yeo Apr 28 '13 at 21:09
You can use the following:
according to Fuchs, Theorem 15.5 (which holds in some way even for modules over PID, I suppose), a group is finitely generated iff it is a finite direct sum of cyclic groups. So the torsion part, $T(A)$, is finitely generated as well, i.e. bounded. $T(A)$ is always pure, and pure bounded subgroups are direct summands, so it has a direct complement $C$, which is torsion-free (every torsion element is in $T(A)$) and finitely generated, hence again a direct sum of cyclic groups, this time each of them isomorphic to $\mathbb{Z}$, so $C\simeq\mathbb{Z}^n$. You get: $$A/T(A) = (T(A) \oplus C)/T(A) \simeq C/(T(A) \cap C) \simeq C/0 \simeq C$$ and you're done.
P.S. The elements in $T(A)$ MUST be of FINITE order!
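As a concrete sanity check, the invariant factors (and hence the rank and torsion) can be read off a relation matrix via Smith normal form; the snippet below assumes SymPy's smith_normal_form is available (it is in recent SymPy versions) and uses a made-up example:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# A = Z^3 / <(2,0,0), (0,6,0)>, i.e. A = Z/2 + Z/6 + Z
M = Matrix([[2, 0, 0], [0, 6, 0]])
D = smith_normal_form(M, domain=ZZ)
invariants = [D[i, i] for i in range(min(D.shape))]
rank = M.cols - sum(1 for d in invariants if d != 0)
print(invariants, rank)   # [2, 6] and 1: A/T(A) is free of rank 1
```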
pepa.dvorak
The Merli–Missiroli–Pozzi Two-Slit Electron-Interference Experiment
Rodolfo Rosa
Physics in Perspective, volume 14, pages 178–195 (2012)
In 2002 readers of Physics World voted Young's double-slit experiment with single electrons as "the most beautiful experiment in physics" of all time. Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi carried out this experiment in a collaboration between the Italian Research Council and the University of Bologna almost three decades earlier. I examine their experiment, place it in historical context, and discuss its philosophical implications.
The Most Beautiful Experiment in Physics
In May 1974 the Italian physicists Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi (figure 1) submitted an article to the American Journal of Physics entitled "On the statistical aspect of electron interference phenomena," which was published in March 1976.1 They obtained an interference pattern with an electron microscope that was fitted with a special interferometer, an electron biprism, which consisted basically of a very thin wire oriented perpendicularly to the electron beam and positioned symmetrically between two plates at ground potential, so that when a positive or negative potential was applied to the wire the electron beam was split into two deflected components. Their use of this electron biprism was the first important technical and conceptual feature of their experiment; the second was its ability to observe the continuous arrival of the electrons, one at a time, on a television monitor. Together with Lucio Morettini and Dario Nobili, the trio also produced a 16-millimeter movie entitled Interferenza di elettroni (Interference of Electrons) that was awarded first prize in the Physics Section of the VII Scientific and Technical Cinema Festival in Brussels in 1976.
Giulio Pozzi (b. 1945) and Gian Franco Missiroli (b. 1933), professors in the Department of Physics of the University of Bologna, Italy, and Pier Giorgio Merli (1943–2008), experts on electron microscopy as they appeared in the newspaper Sole 24 ore on September 6, 2003. Pozzi has done pioneering research in interferometry. Missiroli has been deeply engaged in educational research, which prompted him to initiate the work that led to the Merli–Missiroli–Pozzi experiment. Merli was President of the Italian Society of Electron Microscopy from 1984 to 1987 and Director of the LAMEL-CNR Institute in Bologna from 1992 to 1998; he died on February 24, 2008. Credit: Photograph by Pino Guidolotti
Twenty-six years later, in September 2002, Physics World published the results of a survey in which readers were asked to name the most beautiful experiment in physics of all time. They voted the following experiments as the top ten: (1) Young's double-slit experiment applied to the interference of single electrons; (2) Galileo's experiment on falling bodies (1600s); (3) Millikan's oil-drop experiment (1910s); (4) Newton's decomposition of sunlight with a prism (1665–1666); (5) Young's light-interference experiment (1801); (6) Cavendish's torsion-bar experiment (1798); (7) Eratosthenes's measurement of the Earth's circumference (3rd century BC); (8) Galileo's experiments with rolling balls down inclined planes (1600s); (9) Rutherford's discovery of the nucleus (1911); and (10) Foucault's pendulum (1851).2 Historian-philosopher Robert P. Crease, who proposed the survey, commented that:
The double-slit experiment with electrons possesses all of the aspects of beauty most frequently mentioned by readers…. It is transformative, being able to convince even the most die-hard sceptics of the truth of quantum mechanics…. It is economical: the equipment is readily obtained and the concepts are readily understandable, despite its revolutionary result. It is also deep play: the experiment stages a performance that does not occur in nature, but unfolds only in a special situation set up by human beings. In doing so, it dramatically reveals–before our very eyes–something more than was put into it.3
In sketching the historical background to this beautiful experiment, Peter Rodgers, Editor of Physics World, asked:
[Who] actually carried out the experiment? Standard reference books offer no answer to this question but a search through the literature does reveal several unsung experimental heroes.4
Rodgers's list of unsung heroes began with Geoffrey Ingram Taylor, who in 1909 obtained interference fringes using a light source that was so weak that only very few "indivisible units" (later, photons) struck a photographic plate.5 Nearly a half-century later, in 1955, Gottfried Möllenstedt and Heinrich Düker used their invention of the electron biprism to obtain interference fringes with an electron microscope,6 and six years after that Claus Jönsson carried out electron-interference experiments with up to five slits of width 3 × 10⁻⁷ meter.7
milestone … experiment in which there was just one electron in the apparatus at any one time [which was carried out] by Akira Tonomura and co-workers at Hitachi in 1979 [sic, 1989] when they observed the build-up of the fringe pattern with a very weak electron source and an electron biprism.8
Tonomura and his coworkers carried out their experiment at the Advanced Research Laboratory in Hitachi, Tokyo, and published their 1989 paper in the American Journal of Physics. 9 In it they gave the impression that they were the first to demonstrate the formation of interference fringes by single electrons.
Rodgers's account was challenged eight months later, in May 2003, by John Steeds, at that time Head of the Department of Physics at the University of Bristol, who had seen a preliminary version of Merli, Missiroli, and Pozzi's 1976 movie. As Steeds wrote in a Letter to the Editor of Physics World:
I believe that the first double-slit experiment with single electrons was performed by Pier Giorgio Merli, Gian Franco Missiroli and Giulio Pozzi in Bologna in 1974–some 15 years before the Hitachi experiment. Moreover, the Bologna experiment was performed under very difficult experimental conditions: the intrinsic coherence of the thermionic electron source used by the Bologna group was much lower than that of the field-emission source in the Hitachi experiment.10
Merli, Missiroli, and Pozzi themselves then pointed out in a Letter to the Editor of Physics World that Tonomura and his coworkers did not cite their 1976 paper in the American Journal of Physics,11 as Greyson Gilson had already noted after the publication of the Hitachi group's paper in 1989.12 The Italian trio further pointed out that Tonomura and his coworkers included only an incorrect reference to their 1976 movie, and did not even mention that it shows the arrival of single electrons, one after the another, on their television monitor. The referees of the Hitachi group's 1989 paper were evidently unaware that Merli, Missiroli, and Pozzi's paper also had been published in the American Journal of Physics thirteen years earlier.
The Hitachi version of the experiment was indeed excellent,13 but in 2003 Akira Tonomura was still reluctant to grant Merli, Missiroli, and Pozzi priority, as indicated in his reply to Steeds's Letter to the Editor of Physics World:
We believe that we carried out the first experiment in which the build-up process of an interference pattern from single electron events could be seen in real time as in Feynman's famous double-slit Gedanken experiment. This was under the condition, we emphasize, that there was no chance of finding two or more electrons in the apparatus.14
American physicist Mark P. Silverman, who was personally involved in the Hitachi experiment, based his discussion of electron interference exclusively on that experiment in his 1993 and 1995 books,15 making no reference to Merli, Missiroli, and Pozzi's much earlier experiment, although he acknowledged that:
The Hitachi experiment is not the first of its kind (although it was the first I had personally witnessed), but rather one of the last and most conclusive in a line of analogous experiments dating back to just a few years after Einstein proposed the existence of photons.16
There can be no doubt, however, that Merli, Missiroli, and Pozzi carried out the first conclusive double-slit single-electron interference experiment.
I have dwelled on this question of priority not for parochial reasons, but to emphasize the vital role of experiment in the history of science. Philosopher Ian Hacking, for example, criticized philosophers who "[by] legend and perhaps by nature … are more accustomed to the armchair than the workbench,"17 and hence reflect "the standard preference for hearing about theory rather than experiment."18 According to Hacking:
History of the natural sciences is now almost always written as a history of theory. Philosophy of science has so much become philosophy of theory that the very existence of pre-theoretical observations or experiments has been denied.19
My aim is to help rectify this alleged imbalance.
The Merli–Missiroli–Pozzi Experiment
Merli, Missiroli, and Pozzi have provided a complete description of their experimental apparatus,20 as shown in figure 2. S is the effective electron source, in other words, not the real source of the electrons, which are emitted thermionically by a hot filament about 36 centimeters above S, and by means of a system of condenser lenses are focused on an area whose diameter can be reduced to approximately 6 millimeters, thus effectively making S a monochromatic point source of electrons. The biprism wire F (radius r = 2 × 10⁻⁷ meter) is at a distance a = 10 centimeters below S and is 2 millimeters away from each of two opposing plates at ground potential. When a voltage V is applied to the biprism wire, an electric field is produced that is equivalent to one produced by a cylindrical condenser of external radius R (slightly smaller than the distance between the two opposing plates) and internal radius r. Merli, Missiroli, and Pozzi showed that when an electron of charge e, mass m, and speed v0 passes the biprism wire F at a distance x away from it, it will be deflected through an angle α given by:21
Schematic diagram (not to scale) of Merli, Missiroli, and Pozzi's electron-biprism experimental apparatus. Electrons emerge as if from the effective source S (or from the virtual sources S1 and S2), are diffracted by the biprism wire F when at a potential V, and interfere inside the region W on the observation plane OP or strike it as particles outside of it. The system of lenses represented by the Ls magnifies the image on the observation plane OP onto the viewing plane VP. Source: Adapted from Merli, Missiroli, and Pozzi, "Diffrazione e interferenza" (ref. 23), p. 87, Fig. 3
$$ \alpha = \frac{2eV}{m v_{0}^{2} \ln\left(R/r\right)}\tan^{-1} \frac{\left(R^{2} - x^{2}\right)^{1/2}}{x}. $$
If the voltage V is positive (converging biprism), the electron will be deflected toward the wire; if V is negative (diverging biprism), it will be deflected away from the wire.
In the overlapping (hatched) region, Merli, Missiroli, and Pozzi state that "a non-localized interference pattern will be produced,"22 non-localized because the interference pattern spans the entire overlapping region, but to see it a fluorescent screen, for example, must be placed in the observation plane OP at a distance b below the biprism wire, where fringes are formed in the interference field of width W given by:
$$ W = 2\left| {\frac{a + b}{a}} \right|\,\left( {\alpha \frac{ab}{a + b} - r} \right). $$
Inserting numbers, with α = 5 × 10⁻⁵ radian, r = 2 × 10⁻⁷ meter, a = 10 centimeters, and b = 24 centimeters,23 it follows that W = 23 × 10⁻⁶ meter. To make the fringes on the observation plane OP visible, however, a system of lenses represented by the Ls is required to enlarge them (240 times), which enables them to be seen on the viewing plane VP with the naked eye or on a television monitor or to be recorded on a photographic plate.24
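These numbers are easy to verify; a minimal sketch using only the values quoted above:

```python
alpha = 5e-5          # deflection angle, rad
r = 2e-7              # biprism-wire radius, m
a, b = 0.10, 0.24     # source-to-wire and wire-to-screen distances, m

W = 2 * abs((a + b) / a) * (alpha * a * b / (a + b) - r)
print(W)              # ~2.3e-5 m, i.e. 23 micrometers

M = 240               # magnification onto the viewing plane VP
print(W * M)          # ~5.4e-3 m: the enlarged field is visible to the eye
```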
As noted above, Möllenstedt and Düker reported experiments in which they obtained interference fringes using an electron microscope;25 Merli, Missiroli, and Pozzi replaced their photographic plate with an image intensifier, which converts an electronic image into an optical image that is 200 times brighter than the image that would be seen on a fluorescent screen in the observation plane OP. The optical image is then transmitted by optic fibers to the photocathode of a SEC (Secondary Electron Conduction) tube that is connected through a video amplifier and control unit to a television monitor. The SEC tube can retain electrostatic charges for a relatively long period of time even after the electron beam has been switched off, which permits the observation of extremely low intensities (one electron at a time) for as long as it takes for the image to be formed. The shortest storage time that Merli, Missiroli, and Pozzi achieved with their image intensifier was 0.04 second, which enabled them to operate with such low electron-current densities that only one electron, or very few electrons, were seen as one or more tiny white dots on their television monitor. Then, by increasing the storage time to the order of minutes, electrons striking certain areas of the viewing plane VP could be seen arriving one at a time, so that fringes began to appear after the arrival of thousands of electrons. In this way, Merli, Missiroli, and Pozzi saw, for the first time, the formation of electron-interference fringes with increasing electron-current densities, as shown in figure 3.26 Every electron hits the television monitor at a precise spot, like a particle, as revealed by the dot of light it produces, but the cumulative behavior of many electrons (even when they are transferred one by one from the emitting filament to the television monitor) shows a wave-like pattern.
The formation of electron-interference fringes inside the interference region W and particle dots outside it with increasing electron-current densities as seen on a television monitor in the viewing plane VP. Source: Merli, Missiroli, and Pozzi, "On the statistical aspect" (ref. 1), p. 306, Fig. 1
As indicated above, if we know the angle of deflection α of an electron that is emitted from the effective source S and then passes the biprism wire F, we can calculate its point of arrival on the observation plane OP. However, the computed distribution of many such points is not identical to the distribution of the electrons that we actually observe; in other words, the computed distribution does not reproduce the fringes in the interference field W on the observation plane OP. To reproduce these fringes, we have to introduce the electron's wave behavior; we have to introduce de Broglie waves. To do so, note that the system illustrated in figure 2 is equivalent to a Fresnel optical biprism: It is as if the electrons were emitted from two virtual point sources, S1 and S2, positioned symmetrically on each side of, and in the same plane as, the effective source S. The separation of the two virtual sources is d = 2αa, so that by introducing the de Broglie wavelength λ, we find that the fringes in the interference field W have a periodicity l = λ(a + b)/d. This optical analogy is useful for understanding the parameters in the Merli–Missiroli–Pozzi experiment, but I should note that other models also have been proposed, some more complex than others, in which quantum-mechanical equations are used directly to explain the observed phenomena.27
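For orientation, the magnitudes involved can be estimated as follows; the 80-kilovolt accelerating voltage is an assumption introduced here purely for illustration (a typical value for such microscopes), not a figure taken from the experiment:

```python
import math

h, m0, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8
U = 80e3                     # assumed accelerating voltage, V

# Relativistically corrected de Broglie wavelength
lam = h / math.sqrt(2 * m0 * e * U * (1 + e * U / (2 * m0 * c**2)))

alpha, a, b = 5e-5, 0.10, 0.24
d = 2 * alpha * a            # separation of the virtual sources, m
l = lam * (a + b) / d
print(lam, l)                # ~4.2e-12 m and ~1.4e-7 m;
# 240x magnification brings the fringe spacing to ~34 micrometers
```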
Three Comments
I first note that Merli, Missiroli, and Pozzi, when discussing the technical specifications and operation of their image intensifier in their 1976 article,28 cite a paper that K.H. Hermann and his coworkers presented at the International School of Electron Microscopy in Erice, Sicily, in April 1970,29 which Merli and Pozzi also attended. In it Hermann and his coworkers illustrated a number of experiments using a Siemens image intensifier that showed the formation of Fresnel interference fringes when an electron-current density of 10⁻¹⁵ amperes per square centimeter passed through a tiny hole in a carbon film,30 so that with a storage time of 0.04 second "only the signals of individual electrons are visible."31 Then, by increasing the storage time up to 120 seconds, they observed directly how the fringes took shape.32 Their experiments were designed mainly to show the technical potential of the Siemens image intensifier, and as such were of interest only to electron microscopists; the broader scientific community failed to grasp their fundamental physical importance. Nonetheless, they were of substantial influence on Merli, Missiroli, and Pozzi's later single-electron experiments, as they themselves pointed out.33
I note, secondly, that the electron-biprism experiment differs in important respects from a traditional double-slit experiment. In the former, there are no real slits, and both the wave and the particle natures of the electron are observed in the same experiment. The statement that "the electron passed through slit 1 (or 2)" is replaced by the statement that "the electron passed to the left (or right) of the wire" or, in the optical analogy, that "the electron was emitted by the virtual source S₁ (or S₂)." Interference fringes form only in the overlapping region W of the observation plane OP, which contains electrons that passed on both sides of the wire. The equation for the angle of deflection α does not envision the formation of interference fringes on the observation plane OP inside the interference field W; it predicts the point of arrival of one electron outside of the interference field W. More precisely, the observation plane OP contains a region A within which the electrons deflected by the biprism's wire arrive; within region A is the region W in which the interference pattern forms. Electrons continue to arrive outside W, and their angles of deflection and hence trajectories can be calculated. The broader region A can be enlarged onto the viewing plane VP, and using the image intensifier it can be observed on the television monitor. Note, in fact, that Merli, Missiroli, and Pozzi's photographic images clearly show, as seen in figure 3, a number of white dots produced by electrons that have been deflected outside of the region W in which the interference fringes are formed. Today, the width of the interference region W can be set routinely, leaving a region outside it in which one can still think in terms of classical particle trajectories. In the single-electron experiment, if an electron arrives at a point x = P₁ − ε (where ε is the experimental limit of resolution), we may say that it passed to the left of the biprism wire, that is, its trajectory is perfectly specified; if, however, it arrives at a point x = P₁ + ε, then its trajectory (if it even still makes sense to use this term) cannot be specified. This highlights the point that, in the same experiment, a transition takes place continuously, as it were, from its description in classical terms to its description in quantum terms.
I note, finally, a point concerning the option of observing the electron either within or outside the interference region W after it has interacted with the biprism wire: once the potential V of the biprism is fixed, the width of the region W depends on the distance b, which we can choose after the electron has passed the biprism wire. Thus, we can choose the width of the interference region W in which the electron reveals its wave-like nature after it has interacted with the biprism wire. This experimental variation, although yet to be tested, is reminiscent of the "delayed choice" that John Archibald Wheeler proposed in 1977 in a Gedanken experiment.34
A Crucial Experiment
Merli, Missiroli, and Pozzi's experiment was a crucial experiment because it demonstrated empirically that electrons are not (only) waves, and not (only) particles. It also is a paradigmatic exemplar of the frequentist interpretation of probability. Thus, from an operational point of view, to determine the probability of an electron reaching a given point x on a screen means counting the number of electrons within a radius of, say, dx around x, relative to the total number of electrons that reached the screen. This count is performed, for example, by using a microdensitometer to measure the blackening of a photographic film in a direction perpendicular to the interference fringes to obtain an intensity curve like the one shown in figure 4.35 According to Merli, Missiroli, and Pozzi:
The intensity curve of the electron-interference fringes as measured by a microdensitometer to record the blackening of a photographic plate in the viewing plane VP. Source: Merli, Missiroli, and Pozzi, "Diffrazione e interferenza" (ref. 23), p. 97, Fig. 10
This curve, which is familiar to us from the study of the intensity resulting from the interference of two wave-like perturbations, in this case indicates the number n of electrons that have hit the various regions of the photographic plate. Thus, if N is the total number of incident electrons, the curve enables us to derive the fraction of them that is distributed in the various different positions. If this curve refers to a single electron, then it will show the probability the electron has of arriving at one point rather than at another.36
Merli, Missiroli, and Pozzi thus clearly support a frequentist interpretation of probability.
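To make the operational definition concrete (the numbers here are invented purely for illustration, not taken from the experiment): if N = 50,000 electrons have reached the plate and n = 600 of them are counted within a narrow band of width dx centered on a point x, the relative frequency n/N = 600/50,000 = 0.012 is the operational estimate of the probability of arrival near x; repeating this count at each position across the plate traces out an intensity curve like the one shown in figure 4.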
That the single-electron experiment demonstrates that the interference pattern results from the accumulation of single events, as for example in the case of a Gaussian distribution, seems to lend support to philosopher Karl R. Popper's claim that:
[What] I call the great quantum muddle consists in taking a distribution function, i.e. a statistical measure function characterizing some sample space (or perhaps some "population" of events), and treating it as a physical property of the elements of the population. It is a muddle: the sample space has hardly anything to do with the elements.37
Popper's muddle thus consists in mistaking the distributive properties of a statistical population for physical properties of its elements. In the single-electron double-slit experiment, the muddle is to conclude that, because the observed distribution is the same as that of light in optical-interference experiments, it must reflect the nature of the electrons producing it. This, in turn, means admitting the existence of a real wave (or wave packet) of a known physical entity, that is, an electromagnetic wave, which in some way is linked to the electron. The formation of fringes thus could be explained if we hypothesize that the electron reveals: (a) its particle nature during emission; (b) its wave nature in the experimental apparatus; and (c) its particle nature again at the screen. This hypothesis cannot apply to the single-electron double-slit experiment, however. As Merli, Missiroli, and Pozzi wrote:
The fringes of interference (and of diffraction) are not due to the fact that the electron is spatially distributed in a continuous manner and becomes a wave (in fact, if this had been the case we would have had fringes of decreasing intensity as the current decreased).38
Instead, as the intensity of the electron-beam current was reduced, the number of electrons reaching the screen in a given interval of time also fell.
In the Merli–Missiroli–Pozzi experiment, the events are independent of each other because only one electron at a time passes through the biprism: On average, the electrons are separated from each other by 10 meters,39 which means that a given electron hits the screen only after the preceding electron has been absorbed in it. I emphasize that this aspect of their experiment, which they achieved for the first time, is of crucial importance because, first and foremost, it excludes the possibility that the fringes were in some way produced by an interaction of the electrons inside the biprism apparatus. It also excludes the possibility that such an interaction occurred in the photographic plate or other detector.
By contrast, Patrick Suppes and Jose Acacio de Barros explained the interference and diffraction of photons on the basis of the following hypotheses:
(i) Photons are emitted by harmonically oscillating sources. (ii) They have definite trajectories. (iii) They have a probability of being scattered at a slit. (iv) Detectors, like sources, are periodic. (v) Photons have positive and negative states which locally interfere, i.e., annihilate each other, when being absorbed.40
The Merli–Missiroli–Pozzi experiment proves that, for electrons, there cannot be any kind of destructive interference involving the detector, because they never interact on their journey to, or arrival at, the detector. Further, Suppes and Acacio de Barros assumed "that the absorber, or photodetector, itself behaves periodically with a frequency ω,"41 but in the Merli–Missiroli–Pozzi experiment the absorber is a well-defined macroscopic device, a photographic plate or an image intensifier, which is totally devoid of periodic oscillations. Moreover, the source of electrons in it is an image of very small diameter produced by a lens system that collects the electrons after they have been emitted thermionically by an incandescent point filament, which involves no periodicity whatsoever. In any case, since the probability of two or more electrons being present simultaneously between the source and detector is negligible, they experience no significant interaction at any time in the entire apparatus.
Suppes and Acacio de Barros, of course, focused on photons, not electrons. Indeed, the Berlin experimentalists Gerhard Simonsohn and Ernst Weihreter pointed out that in double-slit experiments the similarity between photons and electrons, although frequently noted, is valid "only in a restricted sense."42 Nevertheless, Merli, Missiroli, and Pozzi's experiment showed empirically that Suppes and Acacio de Barros's hypotheses cannot apply to electrons. The Italian trio developed and described all of its technical details in such a way as to leave no room for ambiguity or for any ad hoc hypotheses that cannot be tested experimentally. Therefore, their experiment, which is a real experiment, should be borne in mind when new hypotheses are advanced on the basis of Gedanken experiments involving either electrons or photons.
Philosophical Implications
The two-slit experiment is central to interpretations of quantum mechanics. Albert Einstein and Niels Bohr often focused on it in their long debate over the completeness of quantum mechanics beginning in 1927.43 Much later, in the 1990s, the question of whether Werner Heisenberg's uncertainty relations derive from Bohr's principle of complementarity, or vice versa, arose,44 and philosophers entered the debate: Suppes and Acacio de Barros, as we have seen, derived the phenomena of photon interference and diffraction on the basis of certain hypotheses on their emission, absorption, and interaction;45 Arthur Fine argued that the two-slit experiment, when analyzed correctly, confirms the validity of the "classical" theory of probability even in the microworld;46 Karl R. Popper argued that it leads to a new interpretation of probability that is connected ontologically to the introduction of a new physical property, propensity;47 and Peter Milne argued that it provides proof of the inadequacy of such proposals.48 In general, the Merli–Missiroli–Pozzi experiment, which today can be carried out with microscopic objects (electrons, photons, neutrons, and atoms) and with mesoscopic systems such as fullerene molecules,49 did not prompt any fundamental rethinking of the interpretation of quantum mechanics, but I shall argue that it should have engendered philosophical reflection and debate.
In 1970 Leslie E. Ballentine published a classic article on the statistical interpretation of quantum mechanics in which he treated the wave function not as a physical entity, but simply as a mathematical device for calculating probability; the wave-like pictures are epiphenomena produced by the impacts of particles.50 Merli, Missiroli, and Pozzi's single-electron experiment would seem to support Ballentine's view, at least at first glance: The observed image that gradually appears on the television monitor is produced by single electrons, and after a sufficient number of them appear their probability distribution is the same as that of the intensity of light in a corresponding optical experiment. Still required, however, was a physical explanation for the behavior of the particles that give rise to these images, for which supporters of the statistical interpretation leaned on the Duane-Landé theory of interference and diffraction.51
Thus, in 1923 William Duane attempted to explain the diffraction of X rays in crystals by introducing a third quantum rule, one for linear momentum, according to which a crystal with a periodicity l in a certain direction can change its momentum p in this direction by an amount Δp = h/l, where h is Planck's constant. Four decades later, in 1965, Alfred Landé, by taking the law of conservation of momentum into account, used Duane's rule to derive the Bragg law of X-ray diffraction. He argued that:
The incident particles do not have to spread like waves…; they stay particles all the time. It is the crystal with its periodic lattice planes which is already spread out in space and as such reacts under the third quantum rule.52
Landé extended this reasoning to an ideal double-slit experiment, concluding that the slit screen reacts to electrons incident on it as a mechanical unit, a "whole solid body," in such a way that it transfers quantized momentum to the electrons, the collective action of which results in their interference behavior.53 The interference of the electrons therefore is not due to a quality inherent in them, but to the quantum-mechanical activity of the diffractor, such as a crystal or a screen with two slits in it.
The Duane-Landé theory, however, is not capable of explaining the results of the Merli–Missiroli–Pozzi experiment, because the interference image in it is obtained with no mechanical transfer of momentum to or from the biprism apparatus. In fact, its "slits" are only virtual slits, and there is nothing mechanical about the formation of the interference fringes. As members of the Bologna group, in describing an electron-biprism interference experiment (this time not with single electrons), wrote in 1973:
In interference experiments it is not necessary to introduce the concepts of interaction between electrons and atoms, regular distribution of atoms in crystalline lattice[s], their dimensions, etc., as for diffraction experiments, but the splitting and superposition of the electron beam is achieved by macroscopic fields without any interaction of the electron with the material.54
In fact, although at first sight the Merli–Missiroli–Pozzi experiment seems to support the statistical interpretation of quantum mechanics, its detailed experimental arrangement proves the opposite, since to explain wave-particle dualism the statistical interpretation invariably has to resort to a model based on a transfer of mechanical momentum.
In 1999 Ballentine, explicitly referring to the single-electron experiment (the one conducted by the Hitachi group), advanced two explanations for the wave-like behavior of electrons, one based on the wave-particle duality, the other on the "quantized momentum transfer to and from" a periodic object like a crystal lattice.55 As in his 1970 article, he considered the latter explanation to be simpler because it does not appeal to any hypotheses about the wave-like nature of the electron, and he therefore employed Occam's razor to prefer it. Regarding Popper's propensity interpretation of probability,56 the problem basically comes down to the necessity of resolving the connection between the probability of a single event and its relative frequency.
Despite the absence of any explicit reference to these philosophical problems, Merli, Missiroli, and Pozzi clearly revealed the tension between the necessity of assigning to an individual electron the probability it has of reaching a given point on a photographic plate, and the necessity of acknowledging interference fringes as a statistical distribution of relative frequencies.57 Moreover, they emphasized that interference must be perceived as resulting from the interaction of a single electron within the experimental apparatus, that is, of the "generating conditions" underlying the intensity distribution:
[The] electron is a particle that reaches a clearly identifiable point on the screen, exposing a single grain of the photographic emulsion, and the interference pattern is the statistical result of a large number of electrons….
Thus we may conclude that the phenomenon of interference is exclusively the consequence of the interaction of the individual electron within the experimental apparatus.58
In short, in the Merli–Missiroli–Pozzi experiment the observed system is a single electron, and its result is the product of single events. Probability thus has to be assigned to a single event.
I stress, finally, that the crucially important feature of the Merli–Missiroli–Pozzi experiment consists essentially in showing the empirical meaning of the probability of a single event within the experimental context of quantum mechanics. In microphysical experiments, we check, for example, whether or not a statistical distribution conforms to theoretical expectations, so frequencies themselves are seen as the sole constituents of probability. In the single-electron experiment, this is turned on its head. The focus now is on the individual particle, in that there are empirical grounds for enquiring about the probability that a single electron will reach a certain point on a screen after the arrival of the preceding electron, even after the apparatus has been switched off. The Merli–Missiroli–Pozzi experiment excludes the possibilities that the interference fringes are due to (i) a real (electromagnetic) wave (or wave packet) that is in some way associated with the electron, (ii) the interaction between one electron and another electron, (iii) any specific characteristics of the electron source, and (iv) a transfer of momentum from the slit screen to the electron. The only remaining explanation is to regard probability as a physical property that is revealed in the single-electron case. In sum, the Merli–Missiroli–Pozzi experiment may be particularly significant philosophically in regard to the role of probability in quantum mechanics.
Pier Giorgio Merli, Gian Franco Missiroli, and Giulio Pozzi never received any official award from the University of Bologna, from the Italian Research Council (Consiglio Nazionale delle Ricerche, CNR), or from any Italian civic or scientific institution, although they brought great credit to all of these institutions. However, after Merli's death in February 2008, some of his friends established the website <http://l-esperimento-piu-bello-della-fisica.bo.imm.cnr.it/english/index.html>, where anyone can learn how the Merli–Missiroli–Pozzi experiment was constructed and performed, and that it "also aims at clarifying the scientific and personal motivations and conditions which allowed the team of Italian physicists to perform the experiment successfully, giving a brilliant contribution to fundamental research in the field of physics." One also can hear Giulio Pozzi explain how the thin biprism wire was prepared. Giorgio Lulli ([email protected]) supervises the website and is prepared to answer questions about the experiment. He also organized a project to produce a remastered version of the original film, Interferenza di elettroni, on a DVD as well as a documentary film (directed by Dario Zanasi and Diego L. Gonzalez) on the Merli–Missiroli–Pozzi experiment that shows the scientific, historical, and human factors involved in its realization. Giorgio Matteucci has described and reproduced subsequent electron experiments performed by the Bologna group,59 including ones analogous to the optical experiments performed in 1818 that showed the existence of Fresnel zones and the Poisson spot.
Lucio Morettini, who directed the movie, died in 2005; he was a member of the Department of Physics at the University of Modena and was in charge of the Department of Scientific Cinematography of the LAMEL-CNR Institute in Bologna. Dario Nobili was Director of the LAMEL-CNR Institute from 1977 to 1987; he strongly encouraged Merli, Missiroli, and Pozzi to produce the movie and took part in its realization.
The storage time of the image intensifier plays a role similar to that of the exposure time of a photographic plate.
Outside of Italy, Merli, Missiroli, and Pozzi, and their colleagues Oriano Donati and Giorgio Matteucci joined Enrico Fermi as among the very few Italian physicists whose papers were nominated by readers for membership on the "AJP All-Star Team"; see Robert H. Romer, "Editorial: Memorable papers from the American Journal of Physics, 1933-1990," Amer. J. Phys. 59 (1991), 201-207, on 204.
P. G. Merli, G. F. Missiroli, and G. Pozzi, "On the statistical aspect of electron interference phenomena," American Journal of Physics 44 (1976), 306–307.
Robert P. Crease, "The most beautiful experiment," Physics World 15 (September 2002), 19-20, on 20.
Peter Rodgers, "The double-slit experiment," Physics World 15 (September 2002), 15.
G. I. Taylor, "Interference fringes with feeble light," Proceedings of the Cambridge Philosophical Society 15 (1909), 114–115, on 114.
G. Möllenstedt and H. Düker, "Fresnelscher Interferenzversuch mit einem Biprisma für Elektronenwellen," Die Naturwissenschaften 42 (1955), 41.
Claus Jönsson, "Elektroneninterferenzen an mehreren künstlich hergestellten Feinspalten," Zeitschrift für Physik 161 (1961), 454–474.
Rodgers, "The double-slit experiment" (ref. 4).
A. Tonomura, J. Endo, T. Matsuda, T. Kawasaki, and H. Ezawa, "Demonstration of single-electron buildup of an interference pattern," Amer. J. Phys. 57 (1989), 117-120.
John Steeds, "The double-slit experiment with single electrons," Physics World 16 (May 2003), 20.
Pier Giorgio Merli, Giulio Pozzi, and Gian Franco Missiroli, ibid.
Greyson Gilson, "Demonstrations of Two-Slit Electron Interference," Amer. J. Phys. 57 (1989), 680.
Tonomura, Endo, Matsuda, Kawasaki, and Ezawa, "Demonstration" (ref. 9).
Akira Tonomura, "The double-slit experiment with single electrons," Physics World 16 (May 2003), 20-21, on 21.
Mark P. Silverman, And Yet It Moves: Strange Systems and Subtle Questions in Physics (Cambridge: Cambridge University Press, 1993), pp. 6-12; idem, More Than One Mystery: Explorations in Quantum Interference (New York: Springer-Verlag, 1995), pp. 1-8.
Silverman, And Yet It Moves (ref. 15), p. 12.
Ian Hacking, Representing and Intervening: Introductory Topics in the Philosophy of Natural Science (Cambridge: Cambridge University Press, 1983), p. 150.
Ibid., pp. 149–150.
P.G. Merli, G.F. Missiroli, and G. Pozzi, "Electron interferometry with the Elmiskop 101 electron microscope," Journal of Physics E: Scientific Instruments 7 (1974), 729–732; G.F. Missiroli, G. Pozzi, and U. Valdrè, "Electron interferometry and interference electron microscopy," ibid. 14 (1981), 649–671.
Missiroli, Pozzi, and Valdrè, "Electron interferometry" (ref. 20), p. 654; see also Jiří Komrska, "Scalar Diffraction Theory in Electron Optics," in L. Marton, ed., Advances in Electronics and Electron Physics, Vol. 30 (New York and London: Academic Press, 1971), pp. 139–234, on pp. 218-230.
Missiroli, Pozzi, and Valdrè, "Electron interferometry" (ref. 20), p. 654.
P.G. Merli, G.F. Missiroli, and G. Pozzi, "Diffrazione e interferenza di elettroni. II.– Interferenza," Giornale di Fisica 17 (1976), 83–101, on 89.
Möllenstedt and Düker, "Fresnelscher Interferenzversuch" (ref. 6).
Merli, Missiroli, and Pozzi, "On the statistical aspect" (ref. 1), p. 306.
See, for example, Missiroli, Pozzi, and Valdrè, "Electron interferometry" (ref. 20), pp. 653-657.
Merli, Missiroli, and Pozzi, "On the statistical aspect" (ref. 1).
K.-H. Hermann, D. Krahl, A. Kübler, K.-H. Müller, and V. Rindfleisch, "Image amplification with television methods," in U. Valdrè, ed., Electron Microscopy in Material Science (New York and London: Academic Press, 1971), pp. 252–272.
Ibid., p. 267, Fig. 21.
P.G. Merli, G.F. Missiroli, and G. Pozzi, "L'esperimento di interferenza degli elettroni singoli," Il Nuovo saggiatore 19, No. 3-4 (2003), 37–40, on 38-39.
John Archibald Wheeler, "The 'Past' and the 'Delayed-choice' Double-slit Experiment," in A.R. Marlow, ed., Mathematical Foundations of Quantum Theory (New York, San Francisco, London: Academic Press, 1978), pp. 9–48.
Merli, Missiroli, and Pozzi, "Diffrazione e interferenza" (ref. 23), p. 97.
Ibid., p. 96.
Karl R. Popper, "Quantum Mechanics without 'The Observer'," in Mario Bunge, ed., Quantum Theory and Reality (New York: Springer-Verlag, 1967), pp. 7–44, on p. 19; reprinted with revisions and additions, in Karl R. Popper, Quantum Theory and the Schism in Physics [From the Postscript to The Logic of Scientific Discovery] (Totowa, N.J.: Rowman and Littlefield, 1982), pp. 35-95, on p. 52.
Patrick Suppes and J. Acacio de Barros, "A Random-Walk Approach to Interference," International Journal of Theoretical Physics 33 (1994), 179–189; idem, "Diffraction with Well-Defined Photon Trajectories: A Foundational Analysis," Foundations of Physics Letters 7 (1994), 501–514, on p. 501.
G. Simonsohn and E. Weihreter, "The double-slit experiment with single-photoelectron detection," Optik 57 (1979), 199–208, on 203.
Max Jammer, The Philosophy of Quantum Mechanics: The Interpretations of Quantum Mechanics in Historical Perspective (New York: John Wiley & Sons, 1974), pp. 109-158.
See, for example, Mario Rabinowitz, "Examination of Wave-Particle Duality via Two-Slit Interference," Modern Physics Letters B 9 (1995), 763–789.
Suppes and Acacio de Barros, "Diffraction" (ref. 40), p. 501.
Arthur Fine, "Probability and the Interpretation of Quantum Mechanics," The British Journal for the Philosophy of Science 24 (1973), 1–37.
Popper, "Quantum Mechanics" (ref. 37); see also D.H. Mellor, The Matter of Chance (Cambridge: at the University Press, 1971), pp. 63-82.
Peter Milne, "A Note on Popper, Propensities, and the Two-Slit Experiment," Brit. J. Phil. Sci. 36 (1985), 66–70.
P. Facchi, A. Mariano, and S. Pascazio, "Mesoscopic interference," arXiv:quant-ph/0105110v1 23 May 2001, 1-16; Recent Research Development in Physics (Transworld Research Network) 3 (2002), 1–29.
L.E. Ballentine, "The Statistical Interpretation of Quantum Mechanics," Reviews of Modern Physics 42 (1970), 358–381.
Alfred Landé, "Quantum Fact and Fiction," Amer. J. Phys. 33 (1965), 123–127; idem, "Quantum Fact and Fiction. II," ibid. 34 (1966), 1160–1163.
Landé, "Quantum Fact" (ref. 51), p. 124.
O. Donati, G.F. Missiroli, and G. Pozzi, "An Experiment on Electron Interference," Amer. J. Phys. 41 (1973), 639–644, on 639; my italics.
Leslie E. Ballentine, Quantum Mechanics: A Modern Development (Singapore, New Jersey, London, Hong Kong: World Scientific, 1999), pp. 137-140, on p. 140.
Popper, "Quantum Mechanics" (ref. 37).
Merli, Missiroli, and Pozzi, "Diffrazione e interferenza" (ref. 23), pp. 93-96.
Ibid., p. 94; my italics.
G. Matteucci, "Electron wavelike behavior: A historical and experimental introduction," Amer. J. Phys. 58 (1990), 1143–1147.
I dedicate my paper to the memory of Pier Giorgio Merli, with whom I discussed an early version of it, gaining many ideas from him over a glass of wine. I am grateful to Gian Franco Missiroli and Giulio Pozzi for their long friendship and to our mutual friends who helped to establish the website on the Merli–Missiroli–Pozzi experiment. I thank Julyan Cartwright for encouraging me to revise and improve my paper. Finally, I most especially thank Roger H. Stuewer for his meticulous and knowledgeable editorial work on it. Without his extraordinary kindness, as well as his technical assistance this paper never would have been published.
Department of Statistical Sciences, University of Bologna, Via delle Belle Arti 41, 40126, Bologna, Italy
Rodolfo Rosa
CNR-IMM, Section of Bologna, Via Gobetti 101, 40129, Bologna, Italy
Correspondence to Rodolfo Rosa.
Rodolfo Rosa received his Ph.D. degree in physics in 1968 and in philosophy in 1977 and was a researcher at the LAMEL-CNR (Consiglio Nazionale delle Ricerche, National Research Council) Institute in Bologna, Italy. Since 1992 he has been Professor in the Faculty of Statistical Sciences at the University of Bologna.
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Rosa, R. The Merli–Missiroli–Pozzi Two-Slit Electron-Interference Experiment. Phys. Perspect. 14, 178–195 (2012). https://doi.org/10.1007/s00016-011-0079-0
Issue Date: June 2012
Keywords: Pier Giorgio Merli; Gian Franco Missiroli; Giulio Pozzi; Akira Tonomura; two-slit experiment; single-electron interference; single-case probability; wave-particle duality; interpretation of quantum mechanics
\begin{document}
\begin{frontmatter} \title{Existence of strong solutions for the Oldroyd model with multivalued right-hand side\footnote{This work is part of project A8 within Collaborative Research Center 910 ``Control of self-organizing nonlinear systems: Theoretical methods and concepts of application'' that is supported by Deutsche Forschungsgemeinschaft. }} \author{Andr\'e Eikmeier} \ead{[email protected]} \address{Technische Universit{\"a}t Berlin, Institut f\"ur Mathematik, Stra{\ss}e des 17.~Juni 136, 10623 Berlin, Germany}
\begin{abstract} The initial value problem for a coupled system is studied. The system consists of a differential inclusion and a differential equation and models the fluid flow of a viscoelastic fluid of Oldroyd type. The set-valued right-hand side of the differential inclusion satisfies certain measurability, continuity and growth conditions. The local existence (and global existence for small data) of a strong solution to the coupled system is shown using a generalisation of Kakutani's fixed-point theorem and applying results from the single-valued case. \end{abstract}
\begin{keyword} Oldroyd model \sep viscoelastic fluid \sep nonlinear evolution equation \sep multivalued differential equation \sep differential inclusion \sep existence \sep Kakutani fixed-point theorem \MSC[2020]{47J35, 34G25, 35R70, 35Q35} \end{keyword}
\end{frontmatter}
\section{Introduction} \label{introduction} \subsection{Problem statement}
\noindent Let $T>0$ and let $\Omega\subset \mathbb{R}^3$ be open, bounded, and connected with $\partial\Omega\in \mathscr{C}^{2,\mu}$, $0<\mu<1$. We consider the Oldroyd model for a viscoelastic fluid in the multivalued version
\begin{equation*}
\left\{\begin{aligned}
\text{Re} \left(\partial_t u + (u\cdot\nabla) u\right) -(1-\alpha)\Delta u + \nabla p &\in \nabla\cdot \tau+F(\cdot ,u) \phantom{=02\alpha D(u)\tau_0f}\hspace{-1.5cm} \text{in }\Omega\times (0,T),\\
\nabla \cdot u &= 0 \phantom{\in 2\alpha D(u)\tau_0\nabla\cdot \tau+fF(\cdot,u)}\hspace{-1.5cm} \text{in } \Omega\times(0,T),\\
\text{We}\left( \partial_t \tau + (u\cdot\nabla)\tau + g_a(\tau,\nabla u)\right) + \tau &= 2\alpha D(u) \phantom{\in 0\tau_0\nabla\cdot \tau+fF(\cdot,u)}\hspace{-1.5cm} \text{in } \Omega\times(0,T),\\
u &=0 \phantom{\in 2\alpha D(u)\tau_0\nabla\cdot \tau+fF(\cdot,u)}\hspace{-1.5cm} \text{on } \partial\Omega\times(0,T),\\
u(\cdot,0)=u_0, \quad\quad \tau(\cdot,0)&=\tau_0 \phantom{\in 2\alpha D(u)0\nabla\cdot \tau+fF(\cdot,u)}\hspace{-1.5cm} \text{in } \Omega,
\end{aligned}\right. \end{equation*} where $u\colon \overline{\Omega}\times [0,T]\to \mathbb{R}^3$ describes the velocity of the fluid, $\tau\colon \overline{\Omega}\times [0,T]\to \mathbb{R}^{3\times 3}$ describes the stress tensor of the fluid, and $p\colon \overline{\Omega}\times [0,T]\to \mathbb{R}$ describes the pressure in the fluid. Re is the Reynolds number, describing the relation between the inertial and viscous forces in the fluid, We is the Weissenberg number, describing the influence of elasticity on the fluid flow, and the parameter $\alpha\in (0,1)$ describes the influence of Newtonian viscosity on the fluid flow. Further, $a\in [-1,1]$ is a model parameter, and $g_a$ is given by \begin{equation*}
g_a(\tau,\nabla u) = \tau W(u) - W(u)\tau - a(D(u)\tau + \tau D(u)) \end{equation*} with \begin{equation*}
D(u)=\frac{1}{2}(\nabla u + \nabla u^\top), \quad W(u)=\frac{1}{2}(\nabla u - \nabla u^\top). \end{equation*} The multivalued function $F$ fulfils certain measurability, continuity, and growth conditions that are given in detail in Section \ref{main_assumptions}. Finally, $u_0$ and $\tau_0$ are the given initial conditions.
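As an aside (standard, and easily checked from the definitions above under the convention $(\nabla u)_{ij}=\partial u_i/\partial x_j$, so that $D(u)+W(u)=\nabla u$), the choice $a=1$ gives
\begin{equation*}
	g_1(\tau,\nabla u) = -(\nabla u)\;\!\tau - \tau\;\!(\nabla u)^\top,
\end{equation*}
so that the constitutive equation then contains the upper-convected derivative $\partial_t \tau + (u\cdot\nabla)\tau - (\nabla u)\;\!\tau - \tau\;\!(\nabla u)^\top$ of $\tau$; similarly, $a=-1$ corresponds to the lower-convected and $a=0$ to the corotational (Jaumann) derivative.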
Multivalued differential equations are used to, e.g., model feedback control problems. In this case, we can rewrite the first inclusion by introducing the single-valued right-hand side $f$ with \begin{equation*}
\begin{aligned}
\text{Re} \left(\partial_t u + (u\cdot\nabla) u\right) -(1-\alpha)\Delta u + \nabla p &= \nabla\cdot \tau+f \phantom{\in F(\cdot,u)} \text{in }\Omega\times (0,T),\\
f&\in F(\cdot,u) \phantom{= \nabla\cdot \tau+f } \text{in }\Omega\times (0,T),\\
\end{aligned} \end{equation*} so we can consider $f$ as the control acting on the velocity $u$ and $F$ as the set of admissible controls, which in the case of a feedback control problem depends on the state $u$.
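As a simple formal illustration of this point of view (only to fix ideas; we do not verify the assumptions of Section \ref{main_assumptions} for this example here), one may think of the set of all body forces that are bounded pointwise in terms of the current state,
\begin{equation*}
	F(t,v) = \left\{ w\in L^r(\Omega)^3 \mid \vert w(x)\vert \leq \beta(t)\left(1 + \vert v(x)\vert\right)\ \text{a.e. in}\ \Omega\right\}\!,
\end{equation*}
with a given nonnegative $\beta\in L^s(0,T)$; these sets are nonempty, closed, and convex, and every admissible control $f$ is then a selection of $t\mapsto F(t,u(t))$ along the state $u$.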
As already mentioned, the differential equation problem considered in this work relies on the so-called Oldroyd model for viscoelastic fluids, i.e., fluids that do not only show the common viscous behaviour, but also elastic behaviour. This model is used, e.g., for blood flow (see, e.g., Bilgi and Atal\i k~\cite{BilgiAtalik}, Bodn\'ar, Sequeira, and Prosi~\cite{BodSeqPro}, Smith and Sequeira~\cite{SmithSequeira}) or for polymer solutions, which appear, e.g., in microfluidic devices due to an enhanced mixing and heat transfer compared to Newtonian fluids (see, e.g., Arratia et al.~\cite{ArrThoDiGo}, Lund et al.~\cite{LundBrownEtAl}, Thomases and Shelley~\cite{ThomasesShelley}), or in applications containing drop formation of the fluid such as the prilling process or ink-jet printing (see, e.g., Alsharif and Uddin~\cite{AlsharifUddin}, Davidson, Harvie, and Cooper~\cite{DavHarCoo}).
\subsection{Literature overview}
This work is a combination of Fern\'andez-Cara, Guill\'en, and Ortega~\cite{FCGO02}, where the single-valued version of the problem above was considered, and Eikmeier and Emmrich \cite{EikEmm20}, where similar methods as in this article were used to prove existence of solutions to a multivalued differential equation problem with nonlocality in time.
The Oldroyd model is one of the most popular models for viscoelastic fluids and has attracted a lot of attention in the last decades. Besides Fern\'andez-Cara, Guill\'en, and Ortega~\cite{FCGO02}, local existence and uniqueness of strong solutions has been proven in Guillop\'e and Saut~\cite{GuillopeSaut}, for a Besov space setting in Chemin and Masmoudi~\cite{CheminMasmoudi}, and, for a more general model, in Renardy~\cite{Renardy90}. As the Oldroyd model incorporates the usual Newtonian model, i.e., the Navier--Stokes equation, as a special case, global existence and uniqueness of weak solutions (in the three-dimensional case and for general initial data) have not yet been proven. For the special case of the Jeffreys model corresponding to $a=0$, global existence of weak solutions for general initial data was shown in Lions and Masmoudi~\cite{LionsMasmoudi}. For a more general constitutive equation including the Oldroyd model, but with stronger assumptions on the dissipation term in the balance of momentum, global existence of weak solutions for general initial data has been proven by Bejaoui and Majdoub~\cite{BejaouiMajdoub}.
Multivalued differential equations (or differential inclusions) have been studied by several authors as well, see, e.g., Aubin and Cellina~\cite{AubinCellina}, Aubin and Frankowska~\cite{AubinFrankowska}, or Deimling~\cite{Deimling} for basic results including theory from set-valued analysis. Extensions of the results shown in Deimling~\cite{Deimling} can be found in O'Regan~\cite{ORegan}. Existence of global solutions to multivalued differential equations with a linear parabolic principal part and a relaxed one-sided Lipschitz nonlinear set-valued operator is considered, e.g., in Beyn, Emmrich, and Rieger~\cite{BeynEmmRie}.
Multivalued differential equations in the context of the Oldroyd model have only been considered, to the best of the author's knowledge, in Obukhovski\u{\i}, Zecca, and Zvyagin~\cite{ObZecZvy} for the special case of the Jeffreys model. There, existence of solutions is shown via topological degree theory. Results for other viscoelastic models like the Voigt model can be found in, e.g., Gori et al.~\cite{Gori_etal} and in Zvyagin and Kuzmin~\cite{ZvyaginKuzmin}.
\subsection{Organisation of the paper}
In Section~2, we introduce the basic notation and recall some results from set-valued analysis that are used in this work. In Section~3, we state the assumptions on the set-valued right-hand side $F$ and a preliminary result needed for the proofs of the main results. The first of these results, on local existence, is then stated and proven in Section~4. Finally, in Section~5, we state and prove the global existence of solutions for small data.
\section{Basic notation and introduction to set-valued analysis}
Given a Banach space $X$, we denote its dual by $X^*$, the norm in $X$ by $\Vert \cdot\Vert_X$, the standard norm in $X^*$ by $\Vert \cdot\Vert_{X^*}$ and the duality pairing by $\langle\cdot,\cdot\rangle$. In the case of a Hilbert space $X$, we denote the inner product by $(\cdot,\cdot)$.
Let $\Omega\subset \mathbb{R}^d$, $d\in \mathbb{N}$, be Lebesgue measurable and let $1\leq p\leq \infty$. We denote the usual Lebesgue spaces by $L^p(\Omega)$, equipped with the standard norm. In the case $p<\infty$, the dual space of $L^p(\Omega)$ is given by $L^{p'}(\Omega)$ with the conjugate $p'=p/(p-1)$ for $p>1$ and $p'=\infty$ for $p=1$. Analogously, for a real, reflexive, separable Banach space $X$ and for $T>0$, we denote the usual Bochner--Lebesgue spaces by $L^p(0,T;X)$, equipped with the standard norm. Again, in the case $p<\infty$, the dual space of $L^p(0,T;X)$ is given by $L^{p'}(0,T;X^*)$. The duality pairing in this case is given by \begin{equation*}
\langle g,f\rangle = \int_0^T \langle g(t),f(t)\rangle\diff t, \end{equation*} see, e.g., Diestel and Uhl~\cite[Theorem~1 on p.~98, Corollary~13 on p.~76, Theorem~1 on p.~79]{DiestelUhl}.
By $W^{k,p}(\Omega)$, $k\in\mathbb{N}$, we denote the usual Sobolev space of $k$-times weakly differentiable functions $u\in L^p(\Omega)$ with $D^\beta u\in L^p(\Omega)$, where $\beta \in \mathbb{N}^d$ is a multiindex of order $\vert\beta\vert\leq k$. The spaces are again equipped with the standard norm. By $W_0^{k,p}(\Omega)$, $p>1$, we denote the space of all functions $u\in W^{k,p}(\Omega)$ with $u=0$ on $\partial\Omega$, also equipped with the standard norm. Similarly, we denote by $W^{1,p}(0,T;X)$ the space of weakly differentiable functions $u\in L^p(0,T;X)$ with $u'\in L^p(0,T;X)$, equipped with the standard norm. We have the continuous embedding $W^{1,1}(0,T;X) \subset \mathscr{C}([0,T];X)$, where $\mathscr{C}([0,T];X)$ denotes the space of all functions on $[0,T]$ with values in $X$ that are continuous, see, e.g., Roub\'i\v{c}ek~\cite[Lemma 7.1]{Roubicek}. A function $u\in W^{1,1}(0,T;X)$ is even almost everywhere equal to a function in $\mathscr{AC}([0,T];X)$, i.e., to a function on $[0,T]$ with values in $X$ that is absolutely continuous, see, e.g., Br\'ezis~\cite[Theorem 8.2]{Brezis}. The space of all functions on $[0,T]$ with values in $X$ that are continuously differentiable is denoted by $\mathscr{C}^1([0,T];X)$, and the space of all functions on $[0,T]$ that are continuous with respect to the weak topology in $X$ is denoted by $\mathscr{C}_w([0,T];X)$.
Next, we introduce a few definitions from set-valued analysis. Let $(\Omega, \Sigma)$ be a measurable space and let $X$ be a complete separable metric space. We denote the Lebesgue $\sigma$-algebra on the interval $[a,b]\subset \mathbb{R}$ by $\mathcal{L}([a,b])$ and the Borel $\sigma$-algebra on $X$ by $\mathcal{B}(X)$. Further, we denote the set of all nonempty and closed subsets $U\subset X$ by $\mathcal{P}_{f}(X)$, the set of all nonempty and convex subsets $U\subset X$ by $\mathcal{P}_{c}(X)$, and the set of all nonempty, closed, and convex subsets $U\subset X$ by $\mathcal{P}_{fc}(X)$.
Let $F\colon \Omega\to 2^X\setminus \{\emptyset\}$ be a set-valued function. We define the pointwise supremum \begin{equation*}
\vert F(\omega)\vert:=\sup \left\{\Vert x\Vert_X\mid x\in F(\omega)\right\}\!,\quad \omega\in \Omega, \end{equation*} and the graph of $F$ \begin{equation*}
\graph(F)=\left\{ (\omega,x)\in \Omega\times X \mid x\in F(\omega)\right\}\!. \end{equation*} We call a function $F\colon \Omega\to \mathcal{P}_f(X)$ measurable if the preimage of each open set is measurable, i.e., \begin{equation*}
F^{-1}(U):=\left\lbrace \omega\in \Omega\mid F(\omega)\cap U \neq \emptyset\right\rbrace \in \Sigma \end{equation*} for every open $U\subset X$. For equivalent definitions, see, e.g., Denkowski, Mig\'orski, and Papageorgiou~\cite[Theorem 4.3.4]{DenMigPapa}. For every measurable set-valued function, there exists a measurable selection, i.e., a $\Sigma$-$\mathcal{B}(X)$-measurable function $f\colon \Omega\to X$ with $f(\omega)\in F(\omega)$ for all $\omega\in \Omega$, see, e.g., Aubin and Frankowska~\cite[Theorem 8.1.3]{AubinFrankowska}.
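As a simple illustration (not needed in the sequel; assume in addition that $X$ is a separable Banach space): if $g\colon \Omega\to X$ is $\Sigma$-$\mathcal{B}(X)$-measurable and $\rho>0$, then
\begin{equation*}
	F(\omega):=\left\{ x\in X \mid \Vert x - g(\omega)\Vert_X\leq \rho\right\}
\end{equation*}
is measurable: for every $x\in X$, the function $\omega\mapsto \operatorname{dist}(x,F(\omega))=\max\left(\Vert x-g(\omega)\Vert_X-\rho,\, 0\right)$ is measurable, which is one of the equivalent characterizations mentioned above, and $f=g$ is an obvious measurable selection.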
Now, let $(\Omega, \Sigma, \mu)$ be a complete $\sigma$-finite measure space, let $X$ be a separable Banach space, let $F\colon \Omega \to \mathcal{P}_{f}(X)$ be a set-valued function, and let $p\in [1,\infty)$. By $\mathcal{F}^p$, we denote the set of all $p$-integrable selections of $F$, i.e., \begin{equation*}
\mathcal{F}^p:=\left\{ f\in L^p(\Omega;X,\mu) \mid f(\omega)\in F(\omega)\ \text{a.e. in}\ \Omega\right\}\!, \end{equation*} where $L^p(\Omega;X,\mu)$ is the space of Bochner measurable, $p$-integrable functions with respect to $\mu$.\footnote{If $X$ is a separable Banach space, the Bochner measurability of $f$ coincides with the $\Sigma$-$\mathcal{B}(X)$-measurability, see, e.g., Amann and Escher~\cite[Chapter X, Theorem 1.4]{AmannEscher}, Denkowski, Mig\'orski, and Papageorgiou~\cite[Corollary~3.10.5]{DenMigPapa}, or Papageorgiou and Winkert~\cite[Theorem 4.2.4]{PapaWin}} If there exists a nonnegative function $m\in L^p(\Omega;\mathbb{R},\mu)$ such that $F(\omega)\subset m(\omega)\;\! B_X$ for $\mu$-almost all $\omega\in \Omega$, where $B_X$ denotes the open unit ball in $X$, we call $F$ integrably bounded. In this case, Lebesgue's theorem of dominated convergence implies that every measurable selection of $F$ is an element of $\mathcal{F}^p$. The integral of a set-valued function $F$ is defined by \begin{equation*}
\int_\Omega F \diff \mu := \left\{ \int_\Omega f \diff \mu \mid f\in \mathcal{F}^1\right\}\!. \end{equation*} Important properties of this integral can be found, e.g., in Aubin and Frankowska~\cite[Chapter~8.6]{AubinFrankowska}.
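For example (again only as an illustration): for $\Omega=(0,1)$ with the Lebesgue measure, $X=\mathbb{R}$, and $F(\omega)=[0,m(\omega)]$ with a nonnegative $m\in L^1(0,1)$, the integrable selections are exactly the measurable functions $f$ with $0\leq f\leq m$ a.e., and
\begin{equation*}
	\int_{(0,1)} F \diff \mu = \left[0,\ \int_0^1 m(\omega)\diff \omega\right]\!,
\end{equation*}
since the selections $f=\lambda m$, $\lambda\in [0,1]$, already realise every value in this interval.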
Now, let $F$ have an additional argument, i.e., $F\colon \Omega \times X \to \mathcal{P}_f(X)$, and let $v\colon \Omega \to X$. By $\mathcal{F}^p(v)$, we denote the set of all $p$-integrable selections of the mapping $\omega \mapsto F(\omega, v(\omega))$, i.e., \begin{equation*}
\mathcal{F}^p(v):=\left\{ f\in L^p(\Omega;X,\mu) \mid f(\omega)\in F(\omega, v(\omega))\ \text{a.e. in}\ \Omega\right\}\!. \end{equation*}
Finally, by $c$, we denote a generic positive constant.
\section{Main assumptions and preliminary results} \label{main_assumptions}
For the rest of this work, let $\Omega\subset \mathbb{R}^3$ be open, bounded, and connected with $\partial\Omega\in \mathscr{C}^{2,\mu}$, $0<\mu<1$. In order to formulate the problem, we first introduce certain function spaces for better readability. Since this work refers to Fern\'andez-Cara, Guill\'en, and Ortega \cite{FCGO02} for the single-valued case, we will mostly use the same notation. Let $1<r,s<\infty$. We define \begin{equation*}
\begin{aligned}
H_r&= \left\{ u\in L^r(\Omega)^3 \mid \nabla\cdot u=0,\ u\cdot n= 0 \text{ on } \partial\Omega\right\}\!,\\
V_r&= H_r\cap W_0^{1,r}(\Omega)^3=\left\{ u\in W_0^{1,r}(\Omega)^3 \mid \nabla\cdot u=0\right\}\!,
\end{aligned} \end{equation*} where the divergence in the definition of $H_r$ is meant in the distributional sense and where $u\cdot n$ is meant in the sense of traces with $n$ denoting the outer normal unit vector to $\partial\Omega$. Further, by $P_r$, we denote the usual Helmholtz (or Helmholtz-Leray, Helmholtz-Weyl) projector $P_r\colon L^r(\Omega)^3\to H_r$, i.e., $P_r$ is linear and bounded with $P_r u=v$ where $v$ is given by the so-called Helmholtz decomposition \begin{equation} \label{Helmholtz_decomp}
u=v + \nabla w \end{equation} with $v\in H_r$ and $w\in W^{1,r}(\Omega)$, see, e.g., Galdi~\cite[Chapter III.1]{Galdi}. Based on this, by $A_r$, we denote the Stokes operator $A_r\colon D(A_r)\to H_r$ with the domain $D(A_r)=V_r\cap W^{2,r}(\Omega)^3$ and $A_r u=P_r(-\Delta u)$ for all $u\in D(A_r)$. Equipped with the norm \begin{equation*}
\Vert u \Vert_{D(A_r)}= \Vert u \Vert_{H_r} + \Vert A_r u \Vert_{H_r}, \end{equation*} $D(A_r)$ is a Banach space, see, e.g., Butzer and Berens~\cite[p.~11]{ButzerBerens}. We also introduce the space \begin{equation*}
D_r^s=\left\{ u\in H_r\ \bigg|\ \int_0^\infty \Vert{A_r e^{-tA_r}u}\Vert^s_{H_r}\diff t<\infty\right\}\!, \end{equation*} which is, equipped with the norm \begin{equation*}
\Vert u \Vert_{D_r^s} = \Vert u \Vert_{H_r} + \left(\int_0^\infty \Vert A_r \,e^{-tA_r}\,u \Vert_{H_r}^s \diff t\right)^{1/s}, \end{equation*} again a Banach space, coinciding with a real interpolation space between $D(A_r)$ and $H_r$ and with the continuous and dense embeddings \begin{equation} \label{embeddings_D_r^s}
D(A_r)\subset D^s_r\subset H_r, \end{equation} see, e.g., Butzer and Berens~\cite[Chapter~III]{ButzerBerens}. As mentioned in Fern\'andez-Cara, Guill\'en, and Ortega \cite[p.~563]{FCGO02}, this space is a natural choice for the initial data $u_0$ of our differential inclusion problem if we are looking for a solution in $L^s(0,T;D(A_r))$, see also Giga and Sohr~\cite[pp.~77~f.]{GigaSohr}.
For simplicity, we still write $\partial_t u$, $\nabla u$, or $\nabla\cdot u$ for abstract functions $u\colon [0,T]\to X$, where $X$ is a Banach space of functions mapping $\Omega$ to $\mathbb{R}$ (or $\mathbb{R}^3$, $\mathbb{R}^{3\times 3}$, respectively). Also, if there is no risk of confusion, we simply write, e.g., $L^r$ and $W^{1,r}$ for the spaces $L^r(\Omega)^3$ and $W^{1,r}(\Omega)^{3\times 3}$ and, e.g., $\Vert\cdot\Vert_{L^s(L^r)}$ for the norm of the space $L^s(0,T;L^r)$.
Now, let us state the assumptions on the set-valued right-hand side $F\colon [0,T]\times H_r \to \mathcal{P}_{fc}(L^r)$. We say that the assumptions {\textbf{(F)}} are fulfilled if \begin{itemize}
\item[\textbf{(F1)}] $F$ is measurable,
\item[\textbf{(F2)}] for almost all $t\in(0,T)$, the graph of the mapping $v\mapsto F(t,v)$ is sequentially closed in $H_r\times L^r_w$, where $L^r_w$ denotes the space $L^r$ equipped with the weak topology, and
\item[\textbf{(F3)}] for almost all $t\in(0,T)$ and all $v\in H_r$, we have the estimate $$\vert{F(t,v)}\vert \leq b(t)\left(1 + \gamma\left(\Vert{v}\Vert_{H_r}\right)\right)$$ with $b\in L^s(0,T)$, $b\geq 0$ a.e., and $\gamma\colon[0,\infty)\to [0,\infty)$ a monotonically increasing function. \end{itemize}
An example for $F$ with $\gamma(z)=c\;\!z^{2/s'}$, $c>0$, can be found in Eikmeier and Emmrich~\cite[Section~5]{EikEmm20}.
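To have a concrete model case in mind (the following sketch is ours and need not coincide with the example given there): for a nonnegative $b\in L^s(0,T)$ and $c>0$, consider
\begin{equation*}
	F(t,v)=\left\{ w\in L^r \mid \Vert w\Vert_{L^r}\leq b(t)\left(1 + c\;\!\Vert v\Vert_{H_r}^{2/s'}\right)\right\}\!.
\end{equation*}
These sets are nonempty, closed, convex, and bounded by construction, so \textbf{(F3)} holds with $\gamma(z)=c\;\!z^{2/s'}$; moreover, if $v_n\to v$ in $H_r$ and $w_n\rightharpoonup w$ in $L^r$ with $w_n\in F(t,v_n)$, the weak sequential lower semicontinuity of the norm yields $w\in F(t,v)$, which is the graph closedness required in \textbf{(F2)}, while \textbf{(F1)} can be checked using the equivalent characterizations of measurability recalled in Section~2.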
In order to prove our main result, we need the following \begin{lemma} \label{MeasurabilityNemytskii}
	Let $X$ and $Y$ be separable Banach spaces. If the set-valued mapping $G\colon [0,T]\times X\to \mathcal{P}_f(Y)$ is measurable and the mapping $v\colon [0,T]\to X$ is Bochner measurable, then the set-valued Nemytskii mapping $\tilde{G}_v\colon [0,T]\to \mathcal{P}_f(Y)$, $t\mapsto G(t,v(t))$, is measurable. \end{lemma}
The proof to this lemma with $X=Y$ is given in Eikmeier and Emmrich~\cite[Lemma~2]{EikEmm20}, but it can easily be adapted to the case $X\neq Y$.
\section{Local existence}
We are now able to state our main result about the local existence of strong solutions.
\begin{theorem} \label{thm_local}
Let $\Omega\subset\mathbb{R}^3$ be open, bounded, and connected with $\partial\Omega\in \mathscr{C}^{2,\mu}$, $0<\mu<1$, let $T>0$ and let $u_0\in D^s_r$, $\tau_0\in W^{1,r}$ with $3<r<\infty$, $1<s<\infty$. Let $F\colon [0,T]\times H_r \to \mathcal{P}_{fc}(L^r)$ satisfy the assumptions \textbf{(F)}. Then there exists $T_*>0$ and
\begin{equation*}
\begin{aligned}
u&\in L^s(0,T_*; D(A_r)) \quad\text{with}\quad \partial_t u\in L^s(0,T_*; H_r),\\
\tau &\in \mathscr C([0,T_*];W^{1,r})\quad\text{with}\quad \partial_t \tau\in L^s(0,T_*; L^r),\\
p&\in L^s(0,T_*; W^{1,r}),\\
\end{aligned}
\end{equation*}
such that $(u,\tau,p)$ is a solution to
\begin{equation} \label{problem_multivalued}
\left\{\begin{aligned}
\mathrm{Re} \left(\partial_t u + (u\cdot\nabla) u\right) -(1-\alpha)\Delta u - \nabla\cdot \tau+ \nabla p &\in F(\cdot ,u) \phantom{=02\alpha D(u)\tau_0f}\hspace{-1.5cm} \text{in } (0,T_*),\\
\mathrm{We}\left( \partial_t \tau + (u\cdot\nabla)\tau + g_a(\tau,\nabla u)\right) + \tau &= 2\alpha D(u) \phantom{\in 0\tau_0fF(\cdot,u)}\hspace{-1.5cm} \text{in } (0,T_*),\\
u(0)=u_0, \quad\quad \tau(0)&=\tau_0,
\end{aligned}\right.
\end{equation}
	i.e., a solution to the single-valued problem
\begin{equation} \label{abstract_problem_single-valued}
\left\{\begin{aligned}
\mathrm{Re} \left(\partial_t u + (u\cdot\nabla) u\right) -(1-\alpha)\Delta u -\nabla\cdot \tau+ \nabla p &= f \phantom{02\alpha D(u)\tau_0}\hspace{-.3cm} \text{in } (0,T_*),\\
\mathrm{We}\left( \partial_t \tau + (u\cdot\nabla)\tau + g_a(\tau,\nabla u)\right) + \tau &= 2\alpha D(u) \phantom{0\tau_0f}\hspace{-.3cm} \text{in } (0,T_*),\\
u(0)=u_0, \quad\quad \tau(0)&=\tau_0,
\end{aligned}\right.
\end{equation}
where $f\in L^s(0,T_*; L^r)$ with $f(t)\in F(t,u(t))$ a.e. in $(0,T_*)$. \end{theorem}
\begin{proof}
For $\tilde{T}>0$, we introduce the spaces
\begin{equation*}
\begin{aligned}
\mathcal{U}(\tilde{T})&:= \left\{ u\in L^s(0,\tilde{T};D(A_r))\mid \partial_t u\in L^s(0,\tilde{T};H_r)\right\},\\
\mathcal{T}(\tilde{T})&:= \left\{ \tau\in L^\infty(0,\tilde{T};W^{1,r})\mid \partial_t \tau\in L^s(0,\tilde{T};L^r)\right\},
\end{aligned}
\end{equation*}
and
\begin{equation*}
\mathcal{W}(\tilde{T}):= \mathcal{U}(\tilde{T}) \times \mathcal{T}(\tilde{T})
\end{equation*}
with the norm
\begin{equation*}
\Vert{(u,\tau)}\Vert_{\mathcal{W}(\tilde{T})}:= \Vert{u}\Vert_{L^s(D(A_r))}+\Vert{\partial_t u}\Vert_{L^s(H_r)} + \Vert{\tau}\Vert_{L^\infty(W^{1,r})} + \Vert{\partial_t \tau}\Vert_{L^s(L^r)}.
\end{equation*}
For $R_i>0$, $i=1,2,3$, let
\begin{equation} \label{Y(T)}
\begin{aligned}
\mathcal{Y}(\tilde{T}):= \{ (u,\tau)\in \mathcal{W}(\tilde{T}) \mid\ &u(0)=u_0, \quad \tau(0)=\tau_0,\\
& \!\Vert{u}\Vert_{L^s(D(A_r))}^s+ \Vert{\partial_t u}\Vert^s_{L^s(H_r)}\leq R_1^s,\\
&\!\left.\! \Vert{\tau}\Vert_{L^\infty(W^{1,r})} \leq R_2, \quad \Vert{\partial_t \tau}\Vert_{L^s(L^r)} \leq R_3 \right\}.
\end{aligned}
\end{equation}
There exists $c_1>0$, depending on Re, $r$, $s$, and $\Omega$, such that for
\begin{equation} \label{estimate_R_1_R_2}
R_1\geq \frac{c_1}{1-\alpha} \Vert u_0\Vert_{D^s_r}, \quad R_2\geq \Vert \tau_0\Vert_{W^{1,r}},
\end{equation}
the set $\mathcal{Y}(\tilde{T})$ is nonempty for all $\tilde{T}>0$, see the proof for the single-valued case in Fern\'andez-Cara, Guill\'en, and Ortega~\cite[p.~570]{FCGO02}. Let also
\begin{equation*}
\mathcal{X}(\tilde{T}):= L^s(0,\tilde{T}; V_r)\times \mathscr{C}([0,\tilde{T}];L^r).
\end{equation*}
Now, for $0<\tilde{T}\leq T$, let $$\Phi\colon \mathcal{Y}(\tilde{T})\to \mathcal{P}_c(\mathcal{X}(\tilde{T}))$$ with $$(u,\tau)\in \Phi(\tilde{u}, \tilde{\tau}), \quad (\tilde{u}, \tilde{\tau})\in \mathcal{Y}(\tilde{T}),$$ iff $u\in \mathcal{U}(\tilde{T})$ is a solution to
\begin{equation}\label{linearised_equation_u}
\left\{\begin{aligned}
\text{Re}\;\! \partial_t u +(1-\alpha)\;\!A_r u &\in P_r\left(-\text{Re}\;\! (\tilde{u}\cdot\nabla)\tilde{u} + \nabla\cdot \tilde{\tau} + F(\cdot,\tilde{u})\right) \phantom{=0u_0}\hspace{-.5cm}\text{in } (0,\tilde{T}),\\
u(0)&=u_0,
\end{aligned}\right.
\end{equation}
and $\tau\in \mathcal{T}(\tilde{T})$ is a solution to
\begin{equation}\label{linearised_equation_tau}
\left\{\begin{aligned}
\text{We}\left( \partial_t \tau + (\tilde{u}\cdot\nabla)\tau + g_a(\tau,\nabla\tilde{u})\right) + \tau &= 2\alpha D(\tilde{u}) \phantom{\tau_0} \text{in } (0,\tilde{T}) ,\\
\tau(0)&=\tau_0 .
\end{aligned}\right.
\end{equation}
Since $\tilde{u}$ and $\tilde{\tau}$ are fixed, the two systems~\eqref{linearised_equation_u} and~\eqref{linearised_equation_tau} are linear in $u$ and $\tau$, respectively. First, we show that a fixed point of $\Phi$, i.e., a pair $(u,\tau)\in \mathcal{Y}(\tilde{T})$ with $(u,\tau)\in \Phi(u,\tau)$, implies the existence of a solution $(u,\tau,p)$ to \eqref{problem_multivalued}: Let $(u,\tau)\in \mathcal{Y}(\tilde{T})$ be a fixed point of $\Phi$. Then, there exists $f\in \mathcal{F}^s(u)$ such that
\begin{equation*}
\text{Re}\;\! \partial_t u +(1-\alpha)\;\!A_r u = P_r\left(-\text{Re}\;\! (u\cdot\nabla)u + \nabla\cdot \tau + f\right) \phantom{=0u_0}\hspace{-.5cm}\text{in } (0,\tilde{T}).
\end{equation*}
	Since $\partial_t u\in L^s(0,\tilde{T};H_r)$, we have $P_r(\partial_t u)= \partial_t u$. Together with the definition of the Stokes operator $A_r$, we obtain
\begin{equation*}
P_r\left(\text{Re}\;\! \partial_t u - (1-\alpha) \Delta u + \text{Re}\;\! (u\cdot\nabla)u - \nabla\cdot \tau - f\right)=0 \phantom{=0u_0}\hspace{-.5cm}\text{in } (0,\tilde{T}).
\end{equation*}
Now, the Helmholtz decomposition, see~\eqref{Helmholtz_decomp}, implies that, for almost all $t\in (0,\tilde{T})$, there exists $p(t)\in W^{1,r}$ such that
\begin{equation*}
\text{Re}\;\! \partial_t u - (1-\alpha) \Delta u + \text{Re}\;\! (u\cdot\nabla)u - \nabla\cdot \tau - f = \nabla p\phantom{=0u_0}\hspace{-.5cm}\text{in } (0,\tilde{T}).
\end{equation*}
Due to the regularity of $u$, $\tau$, and $f$, we obtain $p\in L^s(0,\tilde{T};W^{1,r})$. Finally, it is easy to see that $\tau$ solves the second equation in~\eqref{problem_multivalued}, so $(u,\tau,p)$ is a solution to~\eqref{problem_multivalued}.
In order to show the existence of a fixed point, we want to apply the generalisation of Kakutani's fixed-point theorem (see Glicksberg~\cite{Glicksberg} and Fan~\cite{Fan}), i.e., we have to show that there exists $T_*$ with $0<T_*\leq T$ such that $\mathcal{Y}(T_*)$ is nonempty, convex, and compact in $\mathcal{X}(T_*)$, that $\Phi$ maps $\mathcal{Y}(T_*)$ into convex subsets of $\mathcal{Y}(T_*)$, and that $\Phi$ is closed, i.e., its graph is closed in $\mathcal{X}(T_*)\times \mathcal{X}(T_*)$.
First, we show that $\Phi$ is well-defined, i.e., that for all $(\tilde{u},\tilde{\tau})\in \mathcal{Y}(\tilde{T})$, the set $\Phi(\tilde{u},\tilde{\tau})$ is nonempty and convex in $\mathcal{X}(\tilde{T})$. Due to $\tilde{u}\in \mathcal{U}(\tilde{T})$, we have $\tilde{u}\in \mathscr{AC}([0,\tilde{T}];H_r)$, i.e., $\tilde{u}$ is Bochner measurable and the mapping $t\mapsto \Vert \tilde{u}(t)\Vert_{H_r}$ is bounded. Following Lemma \ref{MeasurabilityNemytskii}, the mapping $t\mapsto F(t, \tilde{u}(t))$ is measurable and there exists a measurable selection $f$. Due to Assumption \textbf{(F3)}, we have
\begin{equation*}
\Vert f(t)\Vert_{L^r}\leq b(t)\left( 1 + \gamma\left(\Vert \tilde{u}(t)\Vert_{H_r}\right)\right)\leq b(t)\left( 1+\gamma(c)\right)
\end{equation*}
for some $c>0$ and for almost all $t\in (0,\tilde{T})$, implying $f\in L^s(0,\tilde{T}; L^r)$ (due to $b\in L^s(0,T)$) and thus $f\in \mathcal{F}^s(\tilde{u})$. Now, we can apply Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Lemma~10.1]{FCGO02} to obtain a (unique) solution $u\in \mathcal{U}(\tilde{T})$ to the single-valued problem
\begin{equation} \label{linearised_equation_u_short}
\left\{\begin{aligned}
\text{Re}\;\! \partial_t u +(1-\alpha)\;\!A_r u &=P_r\left(-\text{Re}\;\! (\tilde{u}\cdot\nabla)\tilde{u} + \nabla\cdot \tilde{\tau} + f\right) \phantom{=0u_0}\hspace{-.5cm}\text{in } (0,\tilde{T}),\\
u(0)&=u_0.
\end{aligned}\right.
\end{equation}
Also, following Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Lemma~10.3]{FCGO02}, there exists a (unique) solution $\tau\in \mathcal{T}(\tilde{T})$ to \eqref{linearised_equation_tau}. Therefore, $\Phi(\tilde{u},\tilde{\tau})$ is nonempty. In order to show the convexity of $\Phi(\tilde{u},\tilde{\tau})$, let $(\tilde{u}, \tilde{\tau})\in \mathcal{Y}(\tilde{T})$, $\lambda\in(0,1)$, and $(u_1, \tau_1),(u_2,\tau_2)\in \Phi(\tilde{u}, \tilde{\tau})$. Therefore, there exist $f_1,f_2\in \mathcal{F}^s(\tilde{u})$ such that $u_i$ is a solution to the single-valued problem~\eqref{linearised_equation_u_short} with $f_i$ instead of $f$ on the right-hand side, $i=1,2$. Now, the linearity of problem~\eqref{linearised_equation_u_short} in $u$ (remember that $\tilde{u}$ is fixed) implies that $\lambda u_1+ (1-\lambda)u_2$ is a solution to problem~\eqref{linearised_equation_u_short} with $\lambda f_1 +(1-\lambda)f_2$ on the right-hand side. Since $F$ is convex-valued, it is easy to show that the set $\mathcal{F}^s(\tilde{u})$ is convex as well. Thus, we have $\lambda f_1 +(1-\lambda)f_2\in \mathcal{F}^s(\tilde{u})$. As a consequence, $\lambda u_1+ (1-\lambda)u_2$ is a solution to problem~\eqref{linearised_equation_u}. Also, due to the linearity of problem~\eqref{linearised_equation_tau} in $\tau$, $\lambda\tau_1+(1-\lambda)\tau_2$ is a solution to this problem. Overall, this implies $\lambda (u_1,\tau_1)+(1-\lambda)(u_2,\tau_2)\in \Phi(\tilde{u}, \tilde{\tau})$, i.e., $\Phi$ is convex-valued. Therefore, $\Phi$ is well-defined.
For all $\tilde{T}>0$, the set $\mathcal{Y}(\tilde{T})$ is a nonempty, convex, and compact subset of $\mathcal{X}(\tilde{T})$, see the proof for the single-valued case in Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Theorem~9.1]{FCGO02}. Next, we have to show that there exist $T_*,R_1, R_2,R_3>0$ such that
\begin{equation*}
\Phi\colon \mathcal{Y}(T_*)\to \mathcal{P}_c(\mathcal{Y}(T_*)).
\end{equation*}
We proceed similarly to the proof for the single-valued case, but we have to include the estimates on our set-valued operator $F$. Let $(u,\tau)\in \Phi(\tilde{u}, \tilde{\tau})$ for an arbitrary $(\tilde{u}, \tilde{\tau})\in \mathcal{Y}(\tilde{T})$ and let $f\in \mathcal{F}^s(\tilde{u})$ be such that $u$ solves the single-valued problem~\eqref{linearised_equation_u_short} with $f$. Following Fern\'andez-Cara, Guill\'en, and Ortega~\cite[p.~571]{FCGO02}, an a priori estimate for the solution to the single-valued problem~\eqref{linearised_equation_u_short} (cf. Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Lemma~10.1]{FCGO02}) yields
\begin{equation} \label{a_priori_u_prelim}
\begin{aligned}
\Vert u \Vert_{L^s(D(A_r))}^s + \Vert \partial_t u \Vert_{L^s(H_r)}^s &\leq \left(\frac{c_1}{1-\alpha}\right)^s\left( \Vert u_0 \Vert_{D^s_r}^s + c_2 \left( \Vert u_0 \Vert_{H_r}^{3s(r-1)/(2r)} R_1^{s(r+3)/(2r)}\tilde{T}^{(r-3)/(2r)} \right.\right. \\
&\hspace{3cm} \left.\left. + R_1^{2s} \tilde{T}^{3s(r-1)/(2r)-1} + R_2^s\tilde{T} +\Vert f \Vert_{L^s(L^r)}^s \right)\right)
\end{aligned}
\end{equation}
with $c_2>0$ depending on Re, $r$, $s$, and $\Omega$. Using Assumption \textbf{(F3)}, we obtain
\begin{equation} \label{estimate_f}
\begin{aligned}
\Vert f \Vert_{L^s(L^r)}^s &= \int_0^{\tilde{T}} \Vert f(t)\Vert_{L^r}^s\diff t\\
&\leq \int_0^{\tilde{T}} b(t)^s\left( 1 + \gamma\!\left(\Vert \tilde{u}(t) \Vert_{H_r}\right) \right)^s \diff t.\\
\end{aligned}
\end{equation}
As mentioned before, we have $\tilde{u}\in \mathscr{AC}([0,\tilde{T}];H_r)$ and in particular $\tilde{u}\in L^\infty(0,\tilde{T};H_r)$. Hölder's inequality yields
\begin{equation} \label{estimate_L_infty_H_r}
\Vert \tilde{u} \Vert_{L^\infty(H_r)} \leq \Vert u_0 \Vert_{H_r} + \Vert \partial_t \tilde{u} \Vert_{L^1(H_r)}\leq \Vert u_0 \Vert_{H_r} + \tilde{T}^{1/s'} \Vert \partial_t \tilde{u} \Vert_{L^s(H_r)}.
\end{equation}
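For completeness, we recall the derivation of \eqref{estimate_L_infty_H_r}: since $\tilde{u}\in \mathscr{AC}([0,\tilde{T}];H_r)$ with $\tilde{u}(0)=u_0$, we can write $\tilde{u}(t)=u_0 + \int_0^t \partial_t \tilde{u}(\sigma)\diff \sigma$, so that
\begin{equation*}
\Vert \tilde{u}(t) \Vert_{H_r} \leq \Vert u_0 \Vert_{H_r} + \int_0^{\tilde{T}} \Vert \partial_t \tilde{u}(\sigma)\Vert_{H_r} \diff \sigma \leq \Vert u_0 \Vert_{H_r} + \tilde{T}^{1/s'}\, \Vert \partial_t \tilde{u} \Vert_{L^s(H_r)}
\end{equation*}
for all $t\in[0,\tilde{T}]$, where the last step is Hölder's inequality with the conjugate exponents $s'$ and $s$.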
Together with \eqref{estimate_f} and with the monotonicity of $\gamma$, we have
\begin{equation*}
\begin{aligned}
\Vert f \Vert_{L^s(L^r)}^s &\leq \Vert b \Vert_{L^s(0,T)}^s \left( 1+\gamma\! \left( \Vert u_0 \Vert_{H_r} + \tilde{T}^{1/s'} \Vert \partial_t \tilde{u} \Vert_{L^s(H_r)} \right)\right)^s ,
\end{aligned}
\end{equation*}
so $\tilde{u}\in \mathcal{Y}(\tilde{T})$ implies
\begin{equation} \label{estimate_f_2}
\begin{aligned}
\Vert f \Vert_{L^s(L^r)}^s &\leq \Vert b \Vert_{L^s(0,T)}^s \left( 1 + \gamma\! \left( \Vert u_0 \Vert_{H_r} + R_1 \tilde{T}^{1/s'} \right)\right)^s.
\end{aligned}
\end{equation}
Combined with \eqref{a_priori_u_prelim}, we end up with
\begin{equation} \label{a_priori_u}
\begin{aligned}
\Vert u \Vert_{L^s(D(A_r))}^s + \Vert \partial_t u \Vert_{L^s(H_r)}^s &\leq \left(\frac{c_1}{1-\alpha}\right)^s\left( \Vert u_0 \Vert_{D^s_r}^s + c_2 \left( \Vert u_0 \Vert_{H_r}^{3s(r-1)/(2r)} R_1^{s(r+3)/(2r)}\tilde{T}^{(r-3)/(2r)} \right.\right. \\
&\hspace{2cm} + R_1^{2s} \tilde{T}^{3s(r-1)/(2r)-1} + R_2^s\tilde{T} \\
&\hspace{2.7cm} \left.\left.+ \Vert b \Vert_{L^s(0,T)}^s \left( 1 + \gamma\! \left( \Vert u_0 \Vert_{H_r} + R_1 \tilde{T}^{1/s'} \right)\right)^s \right)\right).
\end{aligned}
\end{equation}
The estimates for $\tau$ are obtained by the a priori estimates for the solution to problem~\eqref{linearised_equation_tau} (cf. Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Lemma~10.3]{FCGO02}), so we have
\begin{equation} \label{a_priori_tau}
\Vert \tau \Vert_{L^\infty(W^{1,r})}\leq \left( \Vert \tau_0 \Vert_{W^{1,r}} + \frac{4\alpha}{c_3\mathrm{We}}\right)\exp\left(c_3\;\!R_1\tilde{T}^{1/s'}\right) =: \Lambda
\end{equation}
and
\begin{equation} \label{a_priori_dt_tau}
\Vert \partial_t \tau \Vert_{L^s(L^r)}\leq c_4 \;\!\Lambda \left(R_1+ \frac{\tilde{T}^{1/s}}{c_3\mathrm{We}}\right)
\end{equation}
with $c_3,c_4>0$ depending on $a$, $r$, $s$, and $\Omega$. Due to the last three estimates, we can choose $T_*$ small enough and $R_1$, $R_2$, and $R_3$ large enough, respectively, such that $(\tilde{u},\tilde{\tau})\in \mathcal{Y}(T_*)$ implies $(u,\tau)\in \mathcal{Y}(T_*)$ for all $(u,\tau)\in \Phi(\tilde{u},\tilde{\tau})$, i.e.,
\begin{equation*}
\Phi(\mathcal{Y}(T_*))\subset \mathcal{Y}(T_*).
\end{equation*}
A possible choice for $T_*$, $R_1$, $R_2$, and $R_3$ is, e.g.,
\begin{equation*}
\begin{aligned}
R_1^s &= \left(\frac{c_1}{1-\alpha}\right)^s \left(\Vert u_0 \Vert_{D^s_r}^s + c_2 \left( \Vert u_0 \Vert_{H_r}^{3s(r-1)/(2r)} +2 +\Vert b \Vert_{L^s(0,T)}^s \left( 1+ \gamma\! \left( \Vert u_0 \Vert_{H_r} +1\right)\right)^s\right)\right),\\
R_2 &= \left( \Vert \tau_0 \Vert_{W^{1,r}} + \frac{4\alpha}{c_3\mathrm{We}}\right) \exp(c_3), \\
R_3 &= c_4\;\!R_2 \left(R_1+ \frac{1}{c_3\mathrm{We}\;\!R_2}\right),\\
T_* &= \min\left( R_1^{-s(r+3)/(r-3)},\ R_1^{-4sr/(3s(r-1)-2r)},\ R_2^{-s},\ R_1^{-s'},\ T \right).
\end{aligned}
\end{equation*}
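For instance, the condition $T_* \leq R_1^{-s'}$ in the definition of $T_*$ guarantees $R_1 T_*^{1/s'} \leq 1$ and hence, by the monotonicity of $\gamma$,
\begin{equation*}
\gamma\! \left( \Vert u_0 \Vert_{H_r} + R_1 T_*^{1/s'} \right) \leq \gamma\! \left( \Vert u_0 \Vert_{H_r} + 1 \right),
\end{equation*}
which is exactly the $\gamma$-term appearing in the definition of $R_1^s$; the remaining conditions on $T_*$ control the other $\tilde{T}$-dependent terms in \eqref{a_priori_u}, \eqref{a_priori_tau}, and \eqref{a_priori_dt_tau} in the same spirit.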
In particular, $R_1$ and $R_2$ fulfil the necessary estimates~\eqref{estimate_R_1_R_2}.
Finally, we have to show that $\Phi$ is closed, i.e., its graph is closed in $\mathcal{X}(T_*)\times \mathcal{X}(T_*)$. Let $(\tilde{u}_n, \tilde{\tau}_n,u_n,\tau_n)\subset \text{graph}\;\! \Phi$ with $(\tilde{u}_n, \tilde{\tau}_n,u_n,\tau_n)\to (\tilde{u},\tilde{\tau}, u,\tau)$ in $\mathcal{X}(T_*)\times \mathcal{X}(T_*)$, so in particular
\begin{equation} \label{strong_convergences}
\begin{aligned}
\tilde{u}_n\to \tilde{u}, ~~ u_n\to u \quad & \text{in } L^s(0,T_*;V_r),\\
\tilde{\tau}_n\to \tilde{\tau}, ~~\tau_n\to\tau \quad &\text{in } \mathscr{C}([0,T_*];L^r).
\end{aligned}
\end{equation}
We have to show $(\tilde{u},\tilde{\tau}, u,\tau)\in \text{graph}\;\!\Phi$, i.e., $(u,\tau)\in \Phi(\tilde{u}, \tilde{\tau})$. Due to $\Phi(\mathcal{Y}(T_*))\subset \mathcal{Y}(T_*)$, we know $(\tilde{u}_n, \tilde{\tau}_n,u_n,\tau_n)\in \mathcal{Y}(T_*)\times \mathcal{Y}(T_*)$. Since the spaces $L^s(0,T_*;D(A_r))$, $L^s(0,T_*;H_r)$, and $L^s(0,T_*;L^r)$ are reflexive Banach spaces and the space $L^\infty(0,T_*;W^{1,r})$ is the dual space of a separable normed space, the boundedness of $\mathcal{Y}(T_*)$ in $\mathcal{W}(T_*)$ implies that there exist subsequences (again denoted by $n$) and $v\in L^s(0,T_*;D(A_r))$, $w\in L^s(0,T_*;H_r)$, $\eta\in L^\infty(0,T_*;W^{1,r})$, and $\theta\in L^s(0,T_*;L^r)$ such that
\begin{equation} \label{weak_convergences}
\begin{aligned}
u_n \rightharpoonup v\quad &\text{in } L^s(0,T_*;D(A_r)),\\
\partial_t u_n\rightharpoonup w\quad & \text{in } L^s(0,T_*;H_r), \\
\tau_n \stackrel{*}{\rightharpoonup} \eta \quad &\text{in } L^\infty(0,T_*;W^{1,r}), \\
\partial_t \tau_n \rightharpoonup \theta \quad &\text{in } L^s(0,T_*;L^r),
\end{aligned}
\end{equation}
and the same for $\tilde{u}_n$ (with $\tilde{v}\in L^s(0,T_*;D(A_r))$) etc. It is easy to see that $w=\partial_t v$ and $\theta=\partial_t \eta$. Due to the uniqueness of the weak limit, it is also easy to see that $\tilde{v}=\tilde{u}$, $\tilde{\eta}=\tilde{\tau}$, $v=u$, and $\eta=\tau$.
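The identification $w=\partial_t v$ can be verified, for example, by passing to the limit in the definition of the weak time derivative: for all $\varphi \in \mathscr{C}_c^\infty((0,T_*))$ and all $h\in H_r^*$, we have
\begin{equation*}
\int_0^{T_*} \langle \partial_t u_n(t), h\rangle\, \varphi(t) \diff t = - \int_0^{T_*} \langle u_n(t), h\rangle\, \varphi'(t) \diff t,
\end{equation*}
and both sides converge due to \eqref{weak_convergences}, yielding the same identity with $w$ and $v$ in place of $\partial_t u_n$ and $u_n$; the identity $\theta=\partial_t \eta$ follows analogously.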
Since $(\tilde{u}_n, \tilde{\tau}_n,u_n,\tau_n)\in \text{graph}\;\! \Phi$, $n\in \mathbb{N}$, there exists $f_n\in \mathcal{F}^s(\tilde{u}_n)$ such that
\begin{equation} \label{problem_n}
\left\{\begin{aligned}
\text{Re}\;\! \partial_t u_n + (1-\alpha)\;\!A_r u_n &= P_r(-\text{Re}\;\! (\tilde{u}_n\cdot\nabla)\tilde{u}_n + \nabla\cdot \tilde{\tau}_n + f_n) \phantom{2 \alpha D(\tilde{u}_n)}\hspace{-.7cm}\text{in } (0,T_*),\\
u_n(0)&=u_0, \\
\text{We}\left( \partial_t \tau_n + (\tilde{u}_n\cdot\nabla)\tau_n + g_a(\tau_n,\nabla\tilde{u}_n)\right) + \tau_n &= 2\alpha D(\tilde{u}_n) \phantom{ P_r(-\text{Re}\;\! (\tilde{u}_n\cdot\nabla)\tilde{u}_n + \nabla\cdot \tilde{\tau}_n + f_n) } \hspace{-.7cm} \text{in } (0,T_*) ,\\
\tau_n(0)&=\tau_0 .
\end{aligned}\right.
\end{equation}
As derived in \eqref{estimate_f_2}, $f_n\in \mathcal{F}^s(\tilde{u}_n)$ and Assumption \textbf{(F3)} yield
\begin{equation*}
\Vert f_n \Vert_{L^s(L^r)}^s \leq \Vert b \Vert_{L^s(0,T_*)}^s \left( 1+ \gamma\! \left( \Vert u_0 \Vert_{H_r} + R_1 T_*^{1/s'} \right)\right)^s,
\end{equation*}
so the sequence $(f_n)\subset L^s(0,T_*;L^r)$ is bounded. Again, due to the reflexivity of $L^s(0,T_*;L^r)$, there exist a subsequence of the subsequence (again denoted by $n$) and $f\in L^s(0,T_*;L^r)$ such that
\begin{equation*}
f_n\rightharpoonup f \quad \text{in } L^s(0,T_*;L^r).
\end{equation*}
Similarly to \eqref{estimate_f_2}, we can also derive the pointwise estimate
\begin{equation*}
\Vert f_n(t)\Vert_{L^r} \leq b(t) \left( 1+ \gamma\!\left(\Vert u_0\Vert_{H_r}+ R_1 T_*^{1/s'}\right)\right) =:\tilde{b}(t)
\end{equation*}
for almost all $t\in(0,T_*)$. Therefore, we have $f_n(t)\in \bar{B}_{L^r}(0;\tilde{b}(t))$ for almost all $t\in(0,T_*)$, where $\bar{B}_{L^r}(0;\tilde{b}(t))$ denotes the closed ball in $L^r$ around $0$ of radius $\tilde{b}(t)$. This implies
\begin{equation*}
f(t)\in \overline{\text{co}}\left( \mathop{}\!\mathrm{w\!-\!\overline{lim}}\;\!\{f_n(t)\}\right)
\end{equation*}
for almost all $t\in (0,T_*)$, where $\mathop{}\!\mathrm{w\!-\!\overline{lim}}\;\!$ denotes the weak Kuratowski limit superior of a sequence of sets, i.e., for a sequence $(M_n)\subset 2^X$ of subsets of a Banach space $X$, we have
\begin{equation*}
\mathop{}\!\mathrm{w\!-\!\overline{lim}}\;\! M_n = \{x\in X\mid \exists (x_k) \subset X,~ x_k\in M_{n_k},~ k\in\mathbb{N},~ x_k\rightharpoonup x \text{ in } X\},
\end{equation*}
see, e.g., Papageorgiou~\cite[Theorem~3.1]{Papageorgiou87}. As we have $f_n(t)\in F(t,\tilde{u}_n(t))$ for almost all $t\in (0,T_*)$ and all $n\in\mathbb{N}$, we obtain
\begin{equation} \label{f_in_co}
f(t)\in \overline{\text{co}}\left( \mathop{}\!\mathrm{w\!-\!\overline{lim}}\;\! F(t,\tilde{u}_n(t))\right)
\end{equation}
for almost all $t\in (0,T_*)$. Since $(\tilde{u}_n,\tilde{\tau}_n)\to (\tilde{u}, \tilde{\tau})$ in $\mathcal{X}(T_*)$ and in particular $\tilde{u}_n\to \tilde{u}$ in $L^s(0,T_*;V_r)$, we have, up to a subsequence, $\tilde{u}_n(t)\to \tilde{u}(t)$ in $V_r$ for almost all $t\in (0,T_*)$, see, e.g., Brezis~\cite[Theorem~4.9]{Brezis}. Now, Assumption~\textbf{(F2)} implies
\begin{equation*}
\mathop{}\!\mathrm{w\!-\!\overline{lim}}\;\! F(t,\tilde{u}_n(t)) \subset F(t,\tilde{u}(t))
\end{equation*}
for almost all $t\in (0,T_*)$. As $F(t,\tilde{u}(t))$ is closed and convex, this yields, together with~\eqref{f_in_co},
\begin{equation} \label{f_in_F}
f(t)\in F(t,\tilde{u}(t))
\end{equation}
for almost all $t\in(0,T_*)$.
It remains to show that we can pass to the limit in \eqref{problem_n}, i.e., that $u$ and $\tau$ solve \eqref{linearised_equation_u_short} and \eqref{linearised_equation_tau}, respectively. Due to the weak convergences established in \eqref{weak_convergences}, we immediately obtain the convergences
\begin{equation*}
\begin{aligned}
\partial_t u_n\rightharpoonup \partial_t u\quad & \text{in } L^s(0,T_*;H_r), \\
\nabla\cdot \tilde{\tau}_n \stackrel{*}{\rightharpoonup} \nabla\cdot\tilde{\tau} \quad & \text{in } L^\infty(0,T_*;L^r),\\
\partial_t \tau_n \rightharpoonup \partial_t \tau \quad &\text{in } L^s(0,T_*;L^r),\\
\tau_n \stackrel{*}{\rightharpoonup} \tau \quad &\text{in } L^\infty(0,T_*;W^{1,r}), \\
D(\tilde{u}_n)\rightharpoonup D(\tilde{u}) \quad &\text{in } L^s(0,T_*;V_r).
\end{aligned}
\end{equation*}
We also have $A_r u_n\rightharpoonup A_r u$ in $L^s(0,T_*;H_r)$ since $u_n\rightharpoonup u$ in $L^s(0,T_*;D(A_r))$ and since $A_r\colon D(A_r)\to H_r$ is a linear, bounded operator and therefore weakly sequentially continuous, see, e.g., Zeidler~\cite[Proposition~21.81]{ZeidlerIIA}. In particular, all these convergences imply the weak convergence of the respective terms in $L^s(0,T_*;L^{r/2})$. Since we only want to show that $u$ and $\tau$ are a solution to~\eqref{linearised_equation_u} and~\eqref{linearised_equation_tau}, i.e., that the respective equations are fulfilled almost everywhere, it suffices to pass to the limit in~\eqref{problem_n} in $L^s(0,T_*;L^{r/2})$ (it would even be enough to pass to the limit in $L^1(0,T_*; L^1)$).
First, we show the weak convergence $(\tilde{u}_n\cdot\nabla)\tilde{u}_n \rightharpoonup (\tilde{u}\cdot\nabla)\tilde{u}$ in $L^s(0,T_*;L^{r/2})$. Let $\varphi\in L^{s'}(0,T_*;L^{r/(r-2)})$. We have
\begin{equation*}
\begin{aligned}
\langle (\tilde{u}_n\cdot\nabla)\tilde{u}_n -(\tilde{u}\cdot\nabla)\tilde{u},\varphi\rangle &= \langle (\tilde{u}_n\cdot\nabla)\tilde{u}_n -(\tilde{u}_n\cdot\nabla)\tilde{u},\varphi\rangle + \langle (\tilde{u}_n\cdot\nabla)\tilde{u}-(\tilde{u}\cdot\nabla)\tilde{u},\varphi\rangle \\
&\leq \Vert \tilde{u}_n\Vert_{L^{\infty}(H_r)} \,\Vert \tilde{u}_n-\tilde{u}\Vert_{L^s(V_r)} \,\Vert \varphi\Vert_{L^{s'}(L^{r/(r-2)})} \\
&\hspace{2.5cm} + \int_0^{T_*}\int_\Omega (\tilde{u}_n-\tilde{u})\cdot (\nabla \tilde{u}\, \varphi)\diff x\diff t. \\
\end{aligned}
\end{equation*}
Due to the embeddings $\mathcal{U}(T_*)\subset W^{1,s}(0,T_*;H_r)\subset L^\infty(0,T_*;H_r)$, the norm $\Vert \tilde{u}_n\Vert_{L^{\infty}(H_r)}$ is bounded, so the (strong) convergence $\tilde{u}_n\to \tilde{u}$ in $L^s(0,T_*;V_r)$ (cf. \eqref{strong_convergences}) implies that the first term vanishes as $n\to\infty$. The second term vanishes as well since $\nabla \tilde{u} \,\varphi\in L^1(0,T_*;L^{r/(r-1)})$ and $\tilde{u}_n\rightharpoonup \tilde{u}$ in $\mathcal{U}(T_*)$ and thus $\tilde{u}_n\stackrel{*}{\rightharpoonup} \tilde{u}$ in $L^\infty(0,T_*;H_r)$.
Similarly, we show the weak convergence of $(\tilde{u}_n\cdot\nabla)\tau_n$ to $(\tilde{u}\cdot\nabla)\tau$ in $L^s(0,T_*;L^{r/2})$. Let $\varphi\in L^{s'}(0,T_*;L^{r/(r-2)})$. We obtain
\begin{equation*}
\begin{aligned}
\langle (\tilde{u}_n\cdot\nabla)\tau_n -(\tilde{u}\cdot\nabla)\tau,\varphi\rangle &= \langle (\tilde{u}_n\cdot\nabla)\tau_n -(\tilde{u}\cdot\nabla)\tau_n,\varphi\rangle + \langle (\tilde{u}\cdot\nabla)\tau_n-(\tilde{u}\cdot\nabla)\tau,\varphi\rangle \\
&\leq \Vert \tilde{u}_n-\tilde{u}\Vert_{L^{s}(H_r)} \,\Vert \tau_n\Vert_{L^\infty(W^{1,r})} \,\Vert \varphi\Vert_{L^{s'}(L^{r/(r-2)})} \\
&\hspace{2.5cm}+ \int_0^{T_*}\int_\Omega \tilde{u}\, \nabla( \tau_n-\tau)\, \varphi\diff x\diff t.
\end{aligned}
\end{equation*}
Again, due to the strong convergence $\tilde{u}_n\to \tilde{u}$ in $L^s(0,T_*;V_r)$ and the weak convergence and thus the boundedness of $(\tau_n)$ in $L^\infty(0,T_*;W^{1,r})$, the first term vanishes as $n\to\infty$. Also, $\tilde{u}\,\varphi\in L^1(0,T_*;L^{r/(r-1)})$ and $\tau_n\stackrel{*}{\rightharpoonup} \tau$ in $L^\infty(0,T_*;W^{1,r})$ imply that the second term vanishes as well as $n\to\infty$.
Next, we show the weak convergence $g_a(\tau_n,\nabla\tilde{u}_n)\rightharpoonup g_a(\tau,\nabla\tilde{u})$ in $L^s(0,T_*;L^{r/2})$. Since $g_a(\tau_n,\nabla\tilde{u}_n)$ is a linear combination of $\tau_n\, \nabla \tilde{u}_n$ and $\nabla \tilde{u}_n\, \tau_n$, it is sufficient to show the weak convergences of these two terms. Let $\varphi\in L^{s'}(0,T_*;L^{r/(r-2)})$. We have
\begin{equation*}
\begin{aligned}
\langle\tau_n\, \nabla \tilde{u}_n-\tau\,\nabla\tilde{u}, \varphi\rangle & = \langle \tau_n\, (\nabla \tilde{u}_n - \nabla \tilde{u}),\varphi\rangle + \langle (\tau_n- \tau)\,\nabla\tilde{u},\varphi\rangle \\
& \leq \Vert \tau_n\Vert_{L^\infty(L^r)} \,\Vert \tilde{u}_n-\tilde{u}\Vert_{L^s(V_r)} \, \Vert\varphi\Vert_{L^{s'}(L^{r/(r-2)})} + \int_0^{T_*} \int_\Omega (\tau_n-\tau) \, \nabla\tilde{u}\, \varphi \diff x \diff t.
\end{aligned}
\end{equation*}
As before, the weak* convergence $\tau_n\stackrel{*}{\rightharpoonup} \tau$ in $L^\infty(0,T_*;W^{1,r})$ implies the boundedness of the norm $\Vert \tau_n\Vert_{L^\infty(L^r)}$. Then, the strong convergence $\tilde{u}_n\to \tilde{u}$ in $L^s(0,T_*;V_r)$ implies that the first term vanishes as $n\to \infty$. The second term vanishes since $\nabla \tilde{u} \,\varphi\in L^1(0,T_*;L^{r/(r-1)})$ and, again, $\tau_n\stackrel{*}{\rightharpoonup} \tau $ in $L^\infty(0,T_*;W^{1,r})$. Thus, $\tau_n\, \nabla \tilde{u}_n\rightharpoonup\tau\,\nabla\tilde{u}$ in $L^s(0,T_*;L^{r/2})$. Obviously, the weak convergence $\nabla\tilde{u}_n \,\tau_n \rightharpoonup \nabla\tilde{u}\,\tau$ in $L^s(0,T_*;L^{r/2})$ can be proven analogously.
Finally, we have to show the convergences of the initial values. The strong convergence $\tau_n\to\tau $ in $\mathscr{C}([0,T_*];L^r)$ (cf.~\eqref{strong_convergences}) immediately implies $\tau_n(0)\to \tau(0)$ in $L^r$. To prove the convergence of $(u_n(0))$, let $\varphi\in \mathscr{C}^1([0,T_*];H_r^*)$. Then, the weak convergences proven before and integration by parts yield
\begin{equation*}
\begin{aligned}
&\langle u_n(T_*), \varphi(T_*)\rangle - \langle u_n(0),\varphi(0)\rangle \\
& = \int_0^{T_*} \langle \partial_t u_n(t),\varphi(t)\rangle\diff t + \int_0^{T_*} \langle u_n(t) , \partial_t \varphi(t)\rangle\diff t \\
& \rightarrow \int_0^{T_*} \langle \partial_t u(t),\varphi(t)\rangle\diff t + \int_0^{T_*} \langle u(t) , \partial_t \varphi(t)\rangle\diff t \\
& = \langle u(T_*), \varphi(T_*)\rangle - \langle u(0),\varphi(0)\rangle
\end{aligned}
\end{equation*}
as $n\to \infty$. Choosing $\varphi(t)=(1-\frac{t}{T_*})\, \sigma$ for an arbitrary $\sigma \in H_r^*$ implies
\begin{equation} \label{convergence_initial_value}
\langle u_n(0),\sigma\rangle \to \langle u(0),\sigma\rangle
\end{equation}
for all $\sigma\in H_r^*$. Since we have $u_n(0)=u_0$ for all $n\in \mathbb{N}$, we obtain $u(0)=u_0$ in $H_r$.
Overall, we have shown that $u$ and $\tau$ are a solution to \eqref{linearised_equation_u_short} and \eqref{linearised_equation_tau}, respectively, so together with \eqref{f_in_F}, we have $(u,\tau)\in \Phi(\tilde{u}, \tilde{\tau})$.\qed \end{proof}
\section{Global existence for small data}
As in the single-valued case (cf. Fern\'andez-Cara, Guill\'en, and Ortega~\cite[Theorem~9.2]{FCGO02}), global existence of strong solutions can be obtained for small data. Since we cannot control $\Vert \tilde{u} \Vert_{L^\infty(H_r)}$ by choosing $T_*$ small enough anymore (cf.~estimate~\eqref{estimate_L_infty_H_r}), this requires a more specific growth condition on the set-valued right-hand side $F\colon [0,T]\times H_r \to \mathcal{P}_{fc}(L^r)$. We say that the assumptions {\textbf{(F')}} are fulfilled if \begin{itemize}
\item[\textbf{(F1)}] $F$ is measurable,
\item[\textbf{(F2)}] for almost all $t\in(0,T)$, the graph of the mapping $v\mapsto F(t,v)$ is sequentially closed in $H_r\times L^r_w$, and
\item[\textbf{(F3')}] $\vert{F(t,v)}\vert \leq b(t) \left(1+ \Vert{v}\Vert_{H_r}^{1+\varepsilon}\right)$ a.e. with $b\in L^s(0,T)$, $b\geq 0$ a.e. and $\varepsilon>0$. \end{itemize}
With these assumptions, we can prove the following result.
\begin{theorem} Let $\Omega\subset\mathbb{R}^3$ be open, bounded, and connected with $\partial\Omega\in \mathscr{C}^{2,\mu}$, $0<\mu<1$ and let $3<r<\infty$, $1<s<\infty$. Let $F\colon [0,T]\times H_r \to \mathcal{P}_{fc}(L^r)$ satisfy the assumptions \textbf{(F')}. Then, for each $T>0$, there exists an $\alpha_0\in (0,1)$ such that for all $\alpha\in (0,\alpha_0)$ and sufficiently small $u_0\in D^s_r$, $\tau_0\in W^{1,r}$ and $b\in L^s(0,T)$, there exist\footnote{The condition that $\alpha$ has to be chosen small enough means that the influence of Newtonian viscosity on the fluid flow has to be big enough. The Reynolds number Re and the Weissenberg number We can be chosen arbitrarily.}
\begin{equation*}
\begin{aligned}
u&\in L^s(0,T; D(A_r)) \quad\text{with}\quad \partial_t u\in L^s(0,T; H_r),\\
\tau &\in \mathscr C([0,T];W^{1,r})\quad\text{with}\quad \partial_t \tau\in L^s(0,T; L^r),\\
p&\in L^s(0,T; W^{1,r}),\\
\end{aligned}
\end{equation*}
such that $(u,\tau,p)$ is a solution to
\begin{equation} \label{problem_multivalued_global}
\left\{\begin{aligned}
\mathrm{Re} \left(\partial_t u + (u\cdot\nabla) u\right) -(1-\alpha)\Delta u - \nabla\cdot \tau+ \nabla p &\in F(\cdot ,u) \phantom{=02\alpha D(u)\tau_0f}\hspace{-1.5cm} \text{in } (0,T),\\
\mathrm{We}\left( \partial_t \tau + (u\cdot\nabla)\tau + g_a(\tau,\nabla u)\right) + \tau &= 2\alpha D(u) \phantom{\in 0\tau_0fF(\cdot,u)}\hspace{-1.5cm} \text{in } (0,T),\\
u(0)=u_0, \quad\quad \tau(0)&=\tau_0.
\end{aligned}\right.
\end{equation} \end{theorem}
\begin{proof}
We use the same method as in the proof of Theorem~\ref{thm_local}. Let $\mathcal{Y}(T)$, $\mathcal{X}(T)$ and $\Phi$ be defined as before. As seen in the estimates~\eqref{estimate_R_1_R_2}, for arbitrary $\alpha$, $R_1$, and $R_2$, the initial values $u_0$ and $\tau_0$ can be chosen small enough such that $\mathcal{Y}(T)$ is nonempty. Analogously to the proof before, $\mathcal{Y}(T)$ is also convex and compact and $\Phi$ is well-defined. We now show that
\begin{equation*}
\Phi(\mathcal{Y}(T))\subset \mathcal{Y}(T)
\end{equation*}
for $\alpha$, $u_0$, $\tau_0$, and $b$ sufficiently small (and $R_1$, $R_2$, and $R_3$ chosen appropriately). We again proceed similarly to the single-valued case. Let $(u,\tau)\in \Phi(\tilde{u}, \tilde{\tau})$ for an arbitrary $(\tilde{u}, \tilde{\tau})\in \mathcal{Y}(T)$ and let $f\in \mathcal{F}^s(\tilde{u})$ be such that $u$ solves the single-valued problem~\eqref{linearised_equation_u_short} with $f$. As before, the a priori estimate for the solution to~\eqref{linearised_equation_u_short} yields (cf. Fern\'andez-Cara, Guill\'en, and Ortega~\cite[p.~575]{FCGO02})
\begin{equation} \label{estimate_u_global}
\Vert u\Vert_{L^s(D(A_r))}^s + \Vert \partial_t u\Vert_{L^s(H_r)}^s \leq \left(\frac{c_1}{1-\alpha}\right)^s \left( \Vert u_0\Vert_{D_r^s}^s + c_5 \left( R_1^s \Vert u_0\Vert_{H_r}^s + R_1^{2s} T^{s-1} + R_2^s T + \Vert f\Vert_{L^s(L^r)}^s \right) \right)
\end{equation}
with the same constant $c_1$ as in the proof before and $c_5>0$ depending on Re, $r$, $s$, and $\Omega$. Using Assumption~\textbf{(F3')}, we have
\begin{equation*}
\Vert f\Vert_{L^s(L^r)}^s \leq 2^{s-1} \int_0^T b(t)^s \left( 1 + \Vert \tilde{u}(t)\Vert_{H_r}^{s(1+\varepsilon)}\right) \diff t.
\end{equation*}
Again, the estimate~\eqref{estimate_L_infty_H_r} and $(\tilde{u}, \tilde{\tau})\in \mathcal{Y}(T)$ yield
\begin{equation*}
\begin{aligned}
\Vert f\Vert_{L^s(L^r)}^s &\leq 2^{s-1} \Vert b \Vert_{L^s(0,T)}^s\left( 1 +\left( \Vert u_0 \Vert_{H_r} + R_1 T^{1/s'} \right)^{s(1+\varepsilon)}\right) \\
&\leq 2^{s-1} \Vert b \Vert_{L^s(0,T)}^s \left( 1 +2^{s(1+\varepsilon)-1}\left( \Vert u_0 \Vert_{H_r}^{s(1+\varepsilon)} + R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)} \right)\right) \\
&\leq c_6\;\!\Vert b \Vert_{L^s(0,T)}^s \left( 1 + \Vert u_0 \Vert_{H_r}^{s(1+\varepsilon)} + R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)+1} \right)
\end{aligned}
\end{equation*}
with $c_6=\max(2^{s-1},2^{s(1+\varepsilon)-1})$. Inserted in~\eqref{estimate_u_global}, we obtain
\begin{equation*}
\begin{aligned}
\Vert u\Vert_{L^s(D(A_r))}^s + \Vert \partial_t u\Vert_{L^s(H_r)}^s &\leq \left(\frac{c_1}{1-\alpha}\right)^s \left( \Vert u_0\Vert_{D_r^s}^s + c_5 \left( R_1^s \Vert u_0\Vert_{H_r}^s + R_1^{2s} T^{s-1} + R_2^s T \right.\right.\\
&\hspace{1cm}\left.\left. + c_6\;\!\Vert b \Vert_{L^s(0,T)}^s \left( 1 + \Vert u_0 \Vert_{H_r}^{s(1+\varepsilon)} + R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)+1} \right) \right) \right).
\end{aligned}
\end{equation*}
For $\tau$, we have the same estimates as before (cf. \eqref{a_priori_tau} and \eqref{a_priori_dt_tau}), i.e.,
\begin{equation*}
\Vert \tau \Vert_{L^\infty(W^{1,r})}\leq \left( \Vert \tau_0 \Vert_{W^{1,r}} + \frac{4\alpha}{c_3\mathrm{We}}\right)\exp\left(c_3\;\!R_1T^{1/s'}\right) = \Lambda
\end{equation*}
and
\begin{equation*}
\Vert \partial_t \tau \Vert_{L^s(L^r)}\leq c_4 \;\!\Lambda \left(R_1+ \frac{T^{1/s}}{c_3\mathrm{We}}\right).
\end{equation*}
Now, we can choose $\alpha$, $u_0$, $\tau_0$, and $b$ sufficiently small as well as $R_1$, $R_2$, and $R_3$ in such a way that the right-hand side of the last three inequalities can be estimated by $R_1^s$, $R_2$, and $R_3$, respectively: First, let $\alpha_1\in(0,1)$ be arbitrary. Now, choose $R_1$ small enough such that
\begin{equation*}
\left(\frac{c_1}{1-\alpha_1}\right)^s c_5 \left( R_1^{2s}T^{s-1} + c_6\,R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)+1} \right) < R_1^s,
\end{equation*}
e.g.,
\begin{equation*}
R_1 < \frac{1}{2}\min\left( \left( \left(\frac{c_1}{1-\alpha_1}\right)^{s} c_5 T^{s-1}\right)^{-1/s},\; \left(\left( \frac{c_1}{1-\alpha_1} \right)^{s} c_5c_6 \, T^{(s-1)(1+\varepsilon)+1}\right)^{-1/(s\varepsilon)} \right).
\end{equation*}
Next, choose $R_2$ small enough such that
\begin{equation*}
\left(\frac{c_1}{1-\alpha_1}\right)^s c_5 \left( R_1^{2s}T^{s-1} + R_2^s T + c_6\,R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)+1} \right) < R_1^s,
\end{equation*}
choose $\alpha_0\in (0,\alpha_1]$ small enough such that
\begin{equation*}
\frac{4\alpha_0}{c_3\mathrm{We}}\exp\left(c_3\;\!R_1T^{1/s'}\right) < R_2,
\end{equation*}
and choose $R_3$ big enough such that
\begin{equation*}
c_4 \left(R_1+ \frac{T^{1/s}}{c_3\mathrm{We}}\right)\frac{4\alpha_0}{c_3\mathrm{We}}\exp\left(c_3\;\!R_1T^{1/s'}\right) < R_3.
\end{equation*}
Finally, choose $u_0$, $\tau_0$ and $b$ sufficiently small such that
\begin{equation*}
\begin{aligned}
\left(\frac{c_1}{1-\alpha}\right)^s \left( \Vert u_0\Vert_{D_r^s}^s + c_5 \left( R_1^s \Vert u_0\Vert_{H_r}^s + R_1^{2s} T^{s-1} + R_2^s T \right.\right.\hspace{2cm}&\\
\left.\left. + c_6\;\!\Vert b \Vert_{L^s(0,T)}^s \left( 1 + \Vert u_0 \Vert_{H_r}^{s(1+\varepsilon)} + R_1^{s(1+\varepsilon)} T^{(s-1)(1+\varepsilon)+1} \right) \right) \right) &\leq R_1^s,\\
\left( \Vert \tau_0 \Vert_{W^{1,r}} + \frac{4\alpha}{c_3\mathrm{We}}\right)\exp\left(c_3\;\!R_1T^{1/s'}\right) &\leq R_2,\\
c_4 \;\!\Lambda \left(R_1+ \frac{T^{1/s}}{c_3\mathrm{We}}\right) &\leq R_3,
\end{aligned}
\end{equation*}
and thus $\Phi(\mathcal{Y}(T))\subset \mathcal{Y}(T)$. Finally, we can show that $\Phi$ is closed analogously to the proof before. Thus, we can now apply the generalisation of Kakutani's fixed-point theorem (see Glicksberg~\cite{Glicksberg} and Fan~\cite{Fan}) to obtain the existence of a fixed point and therefore a solution to problem~\eqref{problem_multivalued_global}.\qed \end{proof}
\end{document}
Publications and Preprints (before 2003)
Paulo R. C. Ruffino
Information on this page has not been updated since 2003. Until a new version of this page is available, please find this and other CV information in Mathematical Reviews (MathSciNet) or in the Curriculum Vitae Lattes.
Paulo Ruffino, 4 April 2010.
Stochastic exponential in Lie groups and its applications.
Preprint, February/2003. 10 pages (ps) .
Geometric aspects of stochastic delay differential equations on manifolds.
Preprint, August/2002. 12 pages (ps) .
Asymptotic angular stability in non-linear systems: rotation numbers and winding numbers.
Preprint, July/2002. 18 pages (ps) .
(With Patrick E. McSharry ).
Non-linear Iwasawa decomposition of stochastic flows: geometrical characterization and examples.
To appear in the Proceedings of Semigroup Operators: Theory and Applications - SOTA2 , Rio de Janeiro September 10-14 2001. 10 pages, (pdf) .
Let $\varphi_t$ be the stochastic flow of a stochastic differential equation on a Riemannian manifold $M$ of constant curvature. For a given initial condition in the orthonormal frame bundle: $x_0\in M$ and $u$ an orthonormal frame in $T_{x_0}M$, there exists a unique decomposition $\varphi_t=\xi_t \circ \Psi_t$ where $\xi_t$ is an isometry, $\Psi_t$ fixes $x_0$ and $D\Psi_t(u)=u\cdot s_t$ where $s_t$ is an upper triangular matrix process. We present the results and the main ideas by working through detailed examples.
Decomposition of stochastic flows and rotation matrix.
Stochastics and Dynamics Vol. 2 (1), 2002.
We provide geometrical conditions on the manifold for the existence of Liao's factorization of stochastic flows (PTRF 25 (3), 2000). If $M$ is simply connected and has constant curvature then this decomposition holds for any stochastic flow; conversely, if every flow on $M$ has this decomposition then $M$ has constant curvature. Under certain conditions, it is possible to go further in the factorization: $ \varphi_t = \xi_t \circ \Psi_t \circ \Theta_t$, where $\xi_t$ and $\Psi_t$ have the same properties as in Liao's decomposition and $(\xi_t \circ \Psi_t)$ are affine transformations on $M$. We study the asymptotic behaviour of the isometric component $\xi_t$ via the rotation matrix, providing a Furstenberg-Khasminskii formula for this skew-symmetric matrix.
Regular Conditional Probability, Disintegration of Probability and Radon Spaces.
(With D. Leão Pinto Jr. and Marcelo Dutra Fragoso). Preprint, 12 pages, (dvi) or (ps).
We establish equivalence of several regular conditional probability properties and Radon space. In addition, we introduce the universally measurable disintegration concept and prove an existence result.
Random Versions of Hartman-Grobman Theorem.
Preprint IMECC, UNICAMP no. 27/01 (2001). 37 pages, (dvi) .
(With Edson Alberto Coayla Teran ).
We present versions of Hartman-Grobman theorems for random dynamical systems (RDS) in the discrete and continuous case. We apply the same random norm used by Wanner (Dynamics Reported, Vol. 4, Springer, 1994), but instead of using difference equations, we perform an appropriate generalization of the deterministic arguments in an adequate space of measurable homeomorphisms to extend his result with weaker hypotheses and simpler arguments.
Lyapunov Exponents for Stochastic Differential Equations in Semi-simple Lie groups
Archivum Mathematicum (Brno), Vol. 37 (3), (2001).
(With Luiz Antonio Barrera San Martin ).
We write an integral formula for the asymptotics of the A-part in the Iwasawa decomposition of the solution of an invariant stochastic equation in a semi-simple group. The integral is with respect to the invariant measure on the maximal flag manifold, the Furstenberg boundary. The integrand of the formula is related to the Takeuchi-Kobayashi Riemannian metric in the flag manifold.
A Fourier analysis of white noise via canonical Wiener space.
Proceedings of the 4th Portuguese Conference on Automatic Control. 04-06 October 2000,
ISBN 972-98603-0-0, pp. 144-148, 2000.
We present a Fourier analysis of the white noise, where this process is considered as the formal derivative of the Brownian motion in the time interval [0,T] with T \geq 0. By a convenient construction of an isomorphism of abstract Wiener space we identify each trajectory of the white noise with a sequence of complex numbers whose modulus and argument represent respectively the amplitude and the phase of each harmonic component exp {i(pi/T)nt} of this (formal) stochastic trajectory.
Wiener Integral in the space of sequences of real numbers.
Archivum Mathematicum (Brno), Vol. 36 (2), pp. 95-101, 2000.
(With Alexandre de Andrade).
Let i:H --> W be the classical Wiener space, where H is the Cameron-Martin space and W={\sigma :[0,1] --> R continuous with \sigma(0) =0}. We extend the canonical isometry H --> l_{2} to a linear isomorphism \Phi :W --> V \subset R^{\infty} which pushes forward the Wiener structure into the abstract Wiener space i:l_{2} --> V. The Wiener integration assumes a new interesting face when it is taken in this space.
A sampling theorem for rotation numbers of linear processes in R^2.
Random Operators and Stochastic Equations, Vol. 8 (2), pp. 175-188, 2000.
We prove an ergodic theorem for the rotation number of the composition of a sequence of stationary random homeomorphisms in $S^{1}$. In particular, the concept of rotation number of a matrix $g\in Gl^{+}(2,{\Bbb R})$ can be generalized to a product of a sequence of stationary random matrices in $Gl^{+}(2,{\Bbb R})$. In this particular case this result provides a counterpart of Osseledec's multiplicative ergodic theorem which guarantees the existence of Lyapunov exponents. A random sampling theorem is then proved to show that the concept we propose is consistent by discretization in time with the rotation number of continuous linear processes on ${\Bbb R}^{2}$.
Characterizations of Radon Spaces.
Statistics and Probability Letters, Vol. 48 (4), pp. 409-413, 1999.
(With D. Leão Pinto Jr. and Marcelo Dutra Fragoso).
Assuming hypotheses only on the $\sigma$-algebra ${\cal F}$, we characterize (via Radon spaces) the class of measurable spaces $(\Omega ,{\cal F})$ that admit regular conditional probability for all probabilities on ${\cal F}$.
Matrix of rotation for stochastic dynamical systems.
Computational and Applied Mathematics, Vol. 18 (2), pp. 213-226, 1999.
The matrix of rotation generalizes the concept of rotation number for stochastic dynamical systems given in Ruffino (Stoch. Stoch. Reports, 1997). This matrix is the asymptotic time average of the Maurer--Cartan form composed with the Riemannian connection along the induced trajectory in the orthonormal frame bundle $OM$ over an $n$-dimensional Riemannian manifold $M$. It provides the asymptotic behaviour of an orthonormal $n$-frame under the action of the derivative flow and the Gram--Schmidt orthonormalization. We lift the stochastic differential equation of the system on $M$ to a stochastic differential equation in $OM$ and we use a Furstenberg-Khasminskii argument to prove that the matrix of rotation exists almost surely with respect to invariant measures on this bundle.
Rotation number for stochastic dynamical systems.
Stochastics and Stochastics Reports, Vol. 60, pp. 289-318, 1997.
Rotation number is the asymptotic time average of the angular rotation of a given tangent vector under the action of the derivative flow in the tangent bundle over a Riemannian manifold $M$. In higher dimension this angle is taken with respect to a reference given by the stochastic parallel transport along the trajectories and the canonical connection in the Stiefel bundle $St_2 M$. So, these numbers give complementary angular information to that given by the Lyapunov exponents. We lift the stochastic differential equation on $M$ to a stochastic equation in the Stiefel bundle and we use a Furstenberg-Khasminskii argument to prove the almost sure existence of the rotation numbers with respect to any invariant measure on this bundle. Finally we present some information about the dynamical system provided by the rotation number: rotation of the stable manifold (Theorem 6.4).
Outreach Articles
An apology for "A Mathematician's Apology" by G. H. Hardy.
Preprint: 6 pages, (dvi) , (ps) or (pdf) .
O problema da corda suspensa.
Matemática Universitária (SBM), no. 24/25, pp. 1-9, June/December 1998.
A Física: no mundo micro e macroscópico.
Revista Brasileira de Ensino de Física, vol. 21, no. 2, June 1999.
Mathematics Department - Imecc - Unicamp
Rua Sérgio Buarque de Holanda, 651
13083-859 Campinas, SP
fax: 55-(0)19- 3289 5766
phone: 55-(0)19- 3521 6033 (office)
Email: [email protected]
Last modified 19 August 2003. | CommonCrawl |
Nano Convergence
Polymeric nanoparticles containing diazepam: preparation, optimization, characterization, in-vitro drug release and release kinetic study
Sarvesh Bohrey1,
Vibha Chourasiya1 &
Archna Pandey1
Nano Convergence volume 3, Article number: 3 (2016)
Nanoparticles formulated from biodegradable polymers like poly(lactic-co-glycolic acid) (PLGA) are being extensively investigated as drug delivery systems due to two important properties: biocompatibility and controlled drug release characteristics. The aim of this work was to formulate diazepam loaded PLGA nanoparticles using the emulsion solvent evaporation technique. Polyvinyl alcohol (PVA) was used as stabilizing agent. Diazepam is a benzodiazepine derivative drug, widely used as an anticonvulsant in the treatment of various types of epilepsy, insomnia and anxiety. This work investigates the effects of some preparation variables on the size and shape of nanoparticles prepared by the emulsion solvent evaporation method. These nanoparticles were characterized by photon correlation spectroscopy (PCS) and transmission electron microscopy (TEM). A zeta potential study was also performed to understand the surface charge of the nanoparticles. The drug release from drug loaded nanoparticles was studied by the dialysis bag method, and the in vitro drug release data were also analyzed with various kinetic models. The results show that sonication time, polymer content, surfactant concentration, ratio of organic to aqueous phase volume, and the amount of drug have an important effect on the size of the nanoparticles. Spherical diazepam loaded PLGA nanoparticles were produced with a size under 250 nm and a zeta potential of −23.3 mV. The in vitro drug release analysis shows sustained release of drug from the nanoparticles, following the Korsmeyer-Peppas model.
Nanotechnology covers the study and production of structures and devices on the nanoscale. Nanoparticles have been extensively studied by researchers in biomedical and biotechnological areas, especially in drug delivery systems, because, owing to their small particle size, they have the potential to increase drug stability, improve the duration of the therapeutic effect, reduce degradation and metabolism, and enhance cellular uptake [1]. Preparation and characterization of nanoparticles are today an important task for researchers, as control over the size and shape of nanoparticles provides control over many of their physical and chemical properties [2]. Different materials can be used to form these nanoparticles, such as polymers, lipids, natural biopolymers etc., but biodegradable and biocompatible polymers have been widely used for the preparation of polymeric nanoparticles [3].
Poly(D,L-lactide-co-glycolide) (PLGA) polymers have attracted significant interest for delivery systems because:
They are biocompatible, biodegradable and less toxic [4].
Approved by the FDA (US Food and Drug Administration) [5].
The unique structure of PLGA nanoparticles, composed of a hydrophilic surface and a hydrophobic core, provides a drug carrying reservoir and also enables them to be dispersed in aqueous solutions [6].
The by-products of PLGA, lactic acid and glycolic acid, can be excreted from the body as water and carbon dioxide through the tricarboxylic acid cycle [7, 8].
Probably the most remarkable feature is the carrier delivery system, which encapsulates active ingredients and releases them under a controlled mechanism [9].
Various methods have been proposed for the preparation of drug loaded PLGA nanoparticles, such as the emulsion solvent evaporation method [10], the nanoprecipitation method [11], the double emulsion solvent evaporation method [12], etc. Many stabilizers are used to prevent the aggregation of these nanoparticles [13] and different organic solvents are used to dissolve the polymer and drug [14].
Diazepam is a lipophilic benzodiazepine derivative drug. Benzodiazepines are considered the treatment of choice for the acute management of severe seizures. Benzodiazepines are active against a wide range of seizure types, have a rapid onset of action once delivered into the central nervous system, and are safe [15]. The IUPAC name of diazepam is 7-chloro-1,3-dihydro-1-methyl-5-phenyl-1,4-benzodiazepin-2(3H)-one; it is widely used as an anticonvulsant in the treatment of various types of epilepsy, insomnia and anxiety, and for induction and maintenance of anesthesia [16]. Figure 1 shows the chemical structure of diazepam.
Chemical structure of Diazepam
Diazepam can be administered via different routes: orally, by intravenous injection, and as rectal solutions, rectal gels, and suppositories [17]. Generally, oral administration of diazepam is the route of choice in the daily practice of pharmacotherapy. However, abuse of diazepam can have serious consequences, even causing death when taken in overdose [18].
On the basis of the literature reviewed, it was found that several authors have reported the preparation of diazepam loaded nanoparticles by different methods using different stabilizers, but none of them previously reported the preparation and optimization of PLGA nanoparticles containing diazepam by the emulsion solvent evaporation method using PVA as stabilizer.
The aim of the work reported here was to design and characterize diazepam-loaded PLGA nanoparticles in order to obtain a controlled release system. Nanoparticles were prepared by the emulsion solvent evaporation method, and the formulation was characterized in terms of size, morphology, drug encapsulation and drug release. This work also investigated the effect of different preparation variables such as sonication time, polymer content, organic phase volume to aqueous phase volume ratio, surfactant content and amount of drug. The in vitro drug release of the drug loaded nanoparticles was studied by the dialysis bag method, and the release data were analyzed with various kinetic models. The significance of this work is to explain scientifically the effect of the different parameters that control the size of nanoparticles prepared by this method.
The biodegradable polymer studied was PLGA (RESOMER® RG 504 molecular weight range is 38,000–54,000 and inherent viscosity is 0.45–0.60 dl/g) with a copolymer ratio of dl-lactide to glycolide of 50:50 gifted from Evonik Mumbai (India). The surfactant used in this process was polyvinyl alcohol (PVA) purchased from Sigma-Aldrich, Mumbai (India). Diazepam was received as gift sample from Windlas Biotech Ltd, Dehradun (India). Purified water of Milli-Q quality was used to prepare the solutions as well as the aqueous phases of the emulsions. All other reagents were of analytical grade.
Preparation of diazepam loaded nanoparticles
The diazepam loaded nanoparticles were prepared by an emulsion solvent evaporation method [10]. Typically, known masses of PLGA polymer and diazepam were added to ethyl acetate, which was suitably stirred to ensure that all material was properly dissolved in the solvent. Then, the organic phase was slowly poured into the stirred aqueous solution of PVA. This mixture was sonicated using a microtip probe sonicator at an energy output of 55 W in continuous mode (Soniweld Probe Sonicator, Imeco Ultrasonics, India) for a few minutes. The formed oil in water (O/W) emulsion was gently stirred at room temperature with a magnetic stirrer (Remi, India) for 5 hours to evaporate the organic solvent. The nanoparticles were recovered by centrifugation (22,000 rpm, 25 min; WX Ultra 100 ultracentrifuge, Thermo Fisher Scientific, USA) and washed with distilled water 2–3 times to remove the surfactant. The purified nanoparticles were freeze-dried (YSI-250, Yorco Freeze Dryer (Lyophilizer), Yorco Sales Pvt. Ltd., India) to obtain a fine powder, which was stored in a vacuum desiccator.
Nanoparticles characterization
The size (Z-average mean) and zeta potential of the nanoparticles were analyzed by photon correlation spectroscopy (PCS), also known as dynamic light scattering (DLS), in triplicate using a Zetasizer (Model ZEN 3600, Malvern Instruments, U.K.). The dried powder samples were suspended in distilled water and briefly sonicated before analysis. The obtained homogeneous suspension was measured for volume mean diameter and size distribution. Each measurement was done in triplicate. The shape, surface morphology and size of the nanoparticles were analyzed by transmission electron microscopy (TECNAI 200 kV TEM (Fei, Electron Optics), Japan). A droplet of the nanoparticle suspension was placed on a carbon-coated copper grid, forming a thin liquid film. Negative staining of the samples was obtained with a 2 % (w/V) solution of phosphotungstic acid.
Entrapment efficiency
Nanoparticles were separated from the dispersion by centrifugation at 22,000 rpm for 25 min. The supernatant obtained after centrifugation was suitably diluted and analyzed for free diazepam using a UV–Visible spectrophotometer (Model No. 2201, UV–Visible double beam spectrophotometer, Shimadzu, India) at 325 nm. The percentage entrapment efficiency was calculated as:
$$\%\ \text{Entrapment efficiency} = \frac{[\text{Drug}]_{\text{total}} - [\text{Drug}]_{\text{supernatant}}}{[\text{Drug}]_{\text{total}}} \times 100$$
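As a worked example of Eq. (1), a minimal Python sketch is given below; the supernatant mass used is a hypothetical value chosen only so that the result matches the 66 % entrapment efficiency reported later for the optimized batch.

    def entrapment_efficiency(drug_total_mg, drug_supernatant_mg):
        # Percentage of drug entrapped in the nanoparticles, Eq. (1)
        return (drug_total_mg - drug_supernatant_mg) / drug_total_mg * 100.0

    # Hypothetical example: 7.5 mg drug used (1.5 mg/ml x 5 ml organic phase);
    # 2.55 mg free drug assumed in the supernatant gives the reported 66 %.
    print(entrapment_efficiency(7.5, 2.55))  # 66.0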
In-vitro drug release
The in-vitro drug release of the diazepam loaded PLGA nanoparticle formulations was studied by the dialysis bag diffusion method [19]. Drug loaded nanoparticle dispersion (5 ml) was placed into a dialysis bag, and the dialysis bag was then kept in a beaker containing 100 ml of pH 7.4 phosphate buffer. The beaker was placed over a magnetic stirrer and the temperature of the assembly was maintained at 37 ± 1 °C throughout the experiment, with the stirring speed kept at 100 rpm. Samples (2 ml) were withdrawn at definite time intervals and replaced with equal amounts of fresh pH 7.4 phosphate buffer. After suitable dilution, the samples were analyzed using a UV–Visible spectrophotometer at 325 nm.
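Since each withdrawn 2 ml sample is replaced with fresh buffer, cumulative release calculations usually correct the measured concentrations for the drug mass already removed. A minimal sketch of this standard correction is given below; the function name and the assumed dose are illustrative, not taken from the paper.

    def cumulative_release_percent(conc_mg_per_ml, dose_mg, v_total=100.0, v_sample=2.0):
        # conc_mg_per_ml: drug concentrations measured at successive sampling times
        released = []
        withdrawn = 0.0  # drug mass removed with all earlier samples
        for c in conc_mg_per_ml:
            mass_released = c * v_total + withdrawn
            withdrawn += c * v_sample
            released.append(100.0 * mass_released / dose_mg)
        return released

    # Illustrative call with made-up concentrations (mg/ml) and an assumed
    # entrapped dose of 4.95 mg (66 % of 7.5 mg):
    print(cumulative_release_percent([0.009, 0.014, 0.022], dose_mg=4.95))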
To analyze the in vitro drug release data, various kinetic models were used to describe the release kinetics. The zero order rate Eq. (2) describes systems where the rate of drug release does not depend on its concentration [20]. The first order Eq. (3) describes release from systems where the rate of drug release is concentration dependent [21]. Higuchi [22] described the release of drugs from an insoluble matrix as a square root of time dependent process based on Fickian diffusion, Eq. (4). Korsmeyer et al. [23] derived a simple mathematical relationship describing drug release from a polymeric system, Eq. (5).
$$C = k_0 t$$
where C is the concentration of drug at time t and k0 is the zero-order rate constant expressed in units of concentration/time.
$$\log C_0 - \log C = k_1 t / 2.303$$
where, C0 is the initial concentration of drug and k1 is the first order rate constant.
$$C \, = \, K_{H} \sqrt t$$
where, KH is the constant reflecting the design variables of the system.
$$M_{t} / \, M_{\infty } = \, K_{KP} t^{n}$$
where Mt/M∞ is the fraction of drug released at time t, KKP is the rate constant and n is the release exponent.
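As an illustration of how Eqs. (2)–(5) can be fitted by linearized least squares, a minimal Python sketch follows; the time points and release values are synthetic, C0 is taken as 100 %, Mt/M∞ is approximated by the cumulative fraction of dose released, and the customary restriction of the Korsmeyer-Peppas fit to the first 60 % of release is assumed.

    import numpy as np

    t = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 12.0, 24.0])      # time (h), synthetic
    q = np.array([12.0, 19.0, 30.0, 38.0, 45.0, 57.0, 78.0])  # cumulative % released

    def r2(y, yhat):
        # coefficient of determination, computed on the (possibly linearized) data
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2)

    # Zero order, Eq. (2): C = k0*t, fitted through the origin
    k0 = np.sum(t * q) / np.sum(t * t)
    print("zero order:  k0 =", k0, " R2 =", r2(q, k0 * t))

    # First order, Eq. (3): log(C0 - C) = log(C0) - k1*t/2.303, with C0 = 100 %
    y1 = np.log10(100.0 - q)
    b1, a1 = np.polyfit(t, y1, 1)
    print("first order: k1 =", -2.303 * b1, " R2 =", r2(y1, a1 + b1 * t))

    # Higuchi, Eq. (4): C = KH*sqrt(t), fitted through the origin
    kH = np.sum(np.sqrt(t) * q) / np.sum(t)
    print("Higuchi:     KH =", kH, " R2 =", r2(q, kH * np.sqrt(t)))

    # Korsmeyer-Peppas, Eq. (5): log(Mt/Minf) = log(KKP) + n*log(t), first 60 % only
    mask = q <= 60.0
    n, logk = np.polyfit(np.log10(t[mask]), np.log10(q[mask] / 100.0), 1)
    print("Korsmeyer-Peppas: n =", n, " KKP =", 10.0 ** logk)

The model with the highest R2 value is then selected, which is how Table 1 below is read.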
Effect of different preparation variables on formulation characteristics
Using the emulsion solvent evaporation technique, several process parameters were tested to determine the best preparation conditions, including time of sonication, amount of polymer in the formulation, surfactant content in the formulation, organic to aqueous phase volume ratio and diazepam content. Only one factor was varied in each series of experiments.
Effect of sonication time
In the emulsion solvent evaporation technique the fundamental step is the addition of energy to obtain the emulsion, and this is provided by sonication. To examine the influence of sonication time on nanoparticle shape and size, it was varied from 1 to 5 min. The preparation procedure yielded spherical particles in all cases according to TEM results. It can be concluded that increasing the sonication time (from 1 to 5 min) increases the applied energy, which leads to a decrease in the size of the nanoparticles (from 430 to 265 nm); these results are summarized in Fig. 2, graph (a), and the TEM image for nanoparticles prepared with 5 min sonication is shown in Fig. 3, image (a). The emulsification step can be considered one of the most significant steps of this technique, because the formation of large particles is the outcome of an insufficient dispersion of the phases. Our results are in accordance with those observed by other authors [24, 25].
Effect of a sonication time on the size of nanoparticles b PLGA content in organic phase in mg/ml on the size of nanoparticles c volume of organic phase in ml on the size of nanoparticles d volume of PVA content in %w/V of aqueous phase and e diazepam content in mg/ml of organic phase
TEM image for nanoparticles prepared by a 5 min sonication b 10 mg/ml PLGA of organic phase c 5 ml of organic solvent d 1 % (w/V) PVA of aqueous phase and e 1.5 mg/ml diazepam of organic phase (or optimization o different variables)
Effect of PLGA content
To verify the effect of polymer content, it was varied from 10 to 20 mg/ml of organic phase, and the effect of the initial amount of polymer on particle morphology and size was studied. The results are summarized in graph (b) in Fig. 2.
According to TEM results, nanoparticles prepared with different amounts of polymer are spherical in shape, but increasing the polymer content led to a regular increase in nanoparticle diameter. When the amount of PLGA was doubled from 10 to 20 mg/ml of organic phase, the particle diameter increased from 270 to about 390 nm. In Fig. 3, image (b) exhibits the TEM image of nanoparticles prepared using PLGA at 10 mg/ml of organic phase. According to this result, it can be concluded that the polymer content in the organic phase is a significant factor for this technique, because the size of the nanoparticles increased as the polymer concentration was increased. An increase in particle size with an increase in polymer amount was also reported by other authors [26, 27]. This was probably caused by the increasing viscosity of the dispersed (organic) phase, resulting in a lower dispersibility of the PLGA solution into the aqueous phase. An increase in polymer concentration leads to an increase in the viscous forces resisting droplet breakdown by sonication. These forces oppose the shear stresses in the organic phase, and the final size of the particles depends on the net shear stress available for droplet breakdown [24].
Effect of organic phase volume to aqueous phase volume ratio
The ratio of organic phase to aqueous phase of an emulsion is an important factor in this technique. The organic phase volume was varied between 1 and 5 ml while keeping the aqueous phase volume constant, and its effect on nanoparticle size was observed; the results are summarized in graph (c) in Fig. 2 and the TEM result in image (c) in Fig. 3. From the results it was observed that an increase in the organic/aqueous ratio leads to a decrease in the size of the nanoparticles from 430 to 345 nm. This occurs because the coalescence of droplets can be prevented when a large amount of organic solvent is available for diffusion in the emulsion.
Effect of PVA content
To study the effect of PVA content on the nanoparticles, aqueous phases with different PVA contents (0.6 to 1.2 % w/V) were prepared. It can be noticed that as the PVA content is increased, the diameter of the nanoparticles first decreases and then gradually increases; the results are shown in graph (d) in Fig. 2 and TEM image (d) in Fig. 3. The presence of PVA molecules stabilizes the emulsion nanodroplets and prevents them from aggregating with one another. For better stabilization, the surfactant molecules must cover the organic/aqueous interfacial area of all the droplets; hence a minimum amount of PVA is required to achieve a small nanoparticle size. As the concentration of PVA is increased, the size of the particles produced by this method first decreases and then increases due to the increased viscosity of the aqueous phase; the viscosity increase reduces the net shear stress available for droplet breakdown (as already discussed in "Effect of PLGA content" section). So the size decreases due to enhanced interfacial stabilization, while it increases due to the increased aqueous phase viscosity. The amount of surfactant plays an important role in this technique [28, 29].
Effect of diazepam content
In this section the effect of diazepam loading into the PLGA nanoparticles was examined. Keeping all other formulation variables constant, the amount of diazepam used was varied from 1.5 to 2.5 mg/ml of organic solvent. It can be concluded that an increase in the initial amount of drug increases the size of the nanoparticles from 230 to 310 nm; the results are summarized in graph (e) in Fig. 2. This can be explained by the fact that a greater amount of drug results in a more viscous organic (dispersed) phase, complicating the mutual dispersion of the phases and forming bigger nanoparticles. TEM experiments showed that the particles remained spherical in all cases; in Fig. 3, image (e) shows the TEM picture of nanoparticles prepared with drug at 1.5 mg/ml of organic solvent.
On the basis of the above discussion, optimized diazepam loaded PLGA nanoparticles were successfully prepared with a size around 230 nm. For this purpose, the optimized variables were: PLGA 10 mg/ml of organic phase, PVA 1 % w/V of aqueous phase, 5 ml of ethyl acetate as organic solvent, diazepam at 1.5 mg/ml of organic phase, and a sonication time of 5 min. With this optimized formulation we obtained spherical nanoparticles with a size around 230 nm, a zeta potential of −23.3 mV and a drug entrapment efficiency of 66 %. The TEM image of these nanoparticles is shown in Fig. 3(e), and the DLS size distribution and zeta potential in Fig. 4.
a DLS image and b Zeta potential graph for nanoparticles prepared by optimization of different variables
In-vitro drug diffusion studies were carried out using the dialysis bag method. The percentage drug release data are shown in Fig. 5. For the kinetic study the following plots were made: cumulative % drug release vs. time (zero order kinetic model); log cumulative % drug remaining vs. time (first order kinetic model); cumulative % drug release vs. square root of time (Higuchi model); log cumulative % drug release vs. log time (Korsmeyer–Peppas model). All plots are shown in Fig. 6 and the results are summarized in Table 1. In Table 1, R2 is the correlation value, k is the rate constant and n is the release exponent. On the basis of the best fit with the highest correlation (R2) value, it is concluded that drug release from the optimized nanoparticle formulation follows the Korsmeyer-Peppas model with release exponent value n = 0.61. The magnitude of the release exponent n indicates that the release mechanism is non-Fickian diffusion.
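For interpretation of the exponent, a small helper can encode the commonly quoted thin-film thresholds (n ≤ 0.5 Fickian, 0.5 < n < 1.0 anomalous, n ≥ 1.0 case-II); note that for spherical particles the limits shift to roughly 0.43 and 0.85, so this is an interpretive sketch rather than part of the original analysis.

    def release_mechanism(n):
        # Korsmeyer-Peppas exponent interpretation, thin-film geometry assumed
        if n <= 0.5:
            return "Fickian diffusion"
        if n < 1.0:
            return "non-Fickian (anomalous) transport"
        return "case-II (zero order) transport"

    print(release_mechanism(0.61))  # non-Fickian (anomalous) transport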
In-vitro drug release for nanoparticles prepared by optimization of different variables
Drug release kinetics plots: a Zero order plot b First order plot c Higuchi plot and d Korsmeyer Peppas plot
Table 1 Interpretation of R-square values and rate constants of release kinetics of nanoparticles
From the above investigation, we can conclude that the preparation of drug loaded nanoparticles by emulsion solvent evaporation method is governed by different preparation variables. By the systematic study of these variables, we got the valuable results. In this technique the most important factor for reducing the size of nanoparticles is increase the shear stress during emulsification, which is done by increasing the applied energy, decreasing the polymer content in organic solvent, using the sufficient amount of surfactant, increasing the organic phase volume to aqueous phase volume ratio. On the basis of optimization of these variables we have successfully synthesized the spherical nanoparticles of diazepam which are reproducible.
PLGA:
poly(lactic-co-glycolic acid)
PVA:
polyvinyl alcohol
TEM:
transmission electron microscopy
V.J. Mohanraj, Y. Chen, Trop. J. Pharm. Res. 5(1), 561 (2006)
M.A. Dar, A. Ingle, M. Rai, Nanomed. Nanotechnol. Biol. Med. 9, 105 (2013)
J.P. Raval, D.R. Naik, K.A. Amin, P.S. Patel, J. Saudi. Chem. Soc. 18, 566 (2014)
B.C.M. te Boekhorst, L.B. Jensen, S. Colombo, A.K. Varkouhi, R.M. Schiffelers, T. Lammers, G. Storm, H.M. Nielsen, G.J. Strijkers, C. Foged, K. Nicolay, J. Control. Release 161, 772 (2012)
G.V. Peter Christoper, C.V. Raghavan K. Siddharth, M.S. Selva Kumar, R.H. Prasad, Saudi Pharm. J. 22,133 (2014)
H.K. Makadia, S.J. Siegel, Polymers 3, 1377 (2011)
E. Locatelli, M.C. Franchini, J. Nanopart. Res. 14, 1316 (2012)
G. Tansık, A. Yakar, U. Gunduz, J. Nanopart. Res. 16, 2171 (2014)
A.N. Ford Versypt, D.W. Pack, R.D. Braatz, J. Control. Release 165, 29 (2013)
T. Nahata, T.R. Saini, J. Microencapsul. 25(6), 426 (2008)
C.E. Mora-Huertas, O. Garrigues, H. Fessi, A. Elaissari, Eur. J. Pharm. Biopharm. 80, 235 (2012)
B. Semete, L. Booysen, Y. Lemmer, L. Kalombo, L. Katata, J. Verschoor, H.S. Swai, Nanomed. Nanotechnol. Biol. Med. 6, 662 (2010)
S.H. Kim, J.H. Jeong, K.W. Chun, T.G. Park, Langmuir 21, 8852 (2005)
K.C. Song, H.S. Lee, I.Y. Choung, K.I. Cho, Y. Ahn, E.J. Choi, Colloids. Surf. A. Physicochem. Eng. Asp. 276, 162 (2006)
G. Abdelbary, R.H. Fahmy, AAPS. PharmSciTech 10(1), 211 (2009)
J. Riss, J. Cloyd, J. Gates, S. Collins, Acta Neurol. Scand. 118, 69 (2008)
M. Carceles, A. Ribó, R. Dávalos, T. Martinez, J. Hernández, Clin. Ther. 26, 737 (2004)
W.A. Watson, T.L. Litovitz, W.K. Schwartz, G.C. Rodgers, J. Youniss, N. Reid, W.G. Rouse, R.S. Rembert, D. Borys, Am. J. Emerg. Med. 22, 335 (2004)
Y.C. Kyo, J.F. Chung, Colloids. Surf. B. Biointerfaces 83, 299 (2011)
S. Dash, P.N. Murthy, L. Nath, P. Chowdhury, Acta Pol. Pharm. 67(3), 217 (2010)
P. Costa, J.M. Sousa Lobo, Eur. J. Pharm. Sci. 13, 123 (2001)
T. Higuchi, J. Pharm. Sci. 52, 1145 (1963)
R.W. Korsmeyer, R.D. Gurny, E.M. Doelker, P. Buri, N.A. Peppas, Int. J. Pharm. 15, 25 (1983)
D. Quintanar-Guerrero, H. Fessi, E. Allémann, E. Doelker, Int. J. Pharm. 143, 133 (1996)
H.Y. Kwon, J.Y. Lee, S.W. Choi, Y. Jang, J.H. Kim, Colloid. Surf. A. 182, 123 (2001)
H. Murakami, M. Kobayashi, H. Takeuchi, Y. Kawashima, Int. J. Pharm. 187, 143 (1999)
S. Desgouille, C. Vauthier, D. Bazile, J. Vacus, J.L. Grossiord, M. Veillard, P. Couvreur, Langmuir 19, 9504 (2003)
M.L.T. Zweers, D.W. Grijpma, G.H.M. Engbers, J. Feijen, J. Biomed. Mater. Res. Part. B. Appl. Biomater. 66B, 559 (2003)
S. Feng, G. Huang, J. Control. Release 71, 53 (2001)
AP provided the direction and guidance for the work. SB and VC carried out all experiments. All authors participated in manuscript preparation and were involved in the discussion of results. All authors read and approved the final manuscript.
The authors are very grateful to Windlas Biotech Ltd. Dehradun for providing a gift sample of diazepam and Evonik Mumbai for providing the gift sample of PLGA. The facilities provided by Head, Department of Chemistry, and Head, Department of Pharmaceutical Science, Dr. Harisingh Gour Central University, Sagar (M.P.) are also acknowledged. The authors are also grateful to Electron Microscope Unit AIIMS, New Delhi for TEM analysis and Department of Pharmaceutical Science RGPV, Bhopal (M.P.) for analysis of DLS.
Department of Chemistry, Dr. Harisingh Gour University, Sagar, Madhya Pradesh, 470003, India
Sarvesh Bohrey, Vibha Chourasiya & Archna Pandey
Correspondence to Sarvesh Bohrey.
Bohrey, S., Chourasiya, V. & Pandey, A. Polymeric nanoparticles containing diazepam: preparation, optimization, characterization, in-vitro drug release and release kinetic study. Nano Convergence 3, 3 (2016). https://doi.org/10.1186/s40580-016-0061-2
Keywords: Biodegradable polymer; Emulsion solvent evaporation technique; Release kinetic models
ACAT 2014
Data Analysis - Algorithms and Tools
1 Sep 2014, 14:00
Faculty of Civil Engineering, Czech Technical University in Prague, Thakurova 7/2077, Prague 166 29, Czech Republic
Data Analysis - Algorithms and Tools: Monday. Convener: Martin Spousta (Charles University)
Data Analysis - Algorithms and Tools: Tuesday. Convener: Alina Gabriela Grigoras (CERN)
Data Analysis - Algorithms and Tools: Thursday
11. The Matrix Element Method within CMS
Camille Beluffi (Universite Catholique de Louvain (UCL) (BE))
The Matrix Element Method (MEM) is unique among the analysis methods used in experimental particle physics because of the direct link it establishes between theory and event reconstruction. This method was used to provide the most accurate measurement of the top mass at the Tevatron, and since then it has been used in the discovery of electroweak production of single top quarks. The method can in...
30. Developments in the ATLAS Tracking Software ahead of LHC Run 2
Nicholas Styles (Deutsches Elektronen-Synchrotron (DE))
After a hugely successful first run, the Large Hadron Collider (LHC) is currently in a shut-down period, during which essential maintenance and upgrades are being performed on the accelerator. The ATLAS experiment, one of the four large LHC experiments, has also used this period for consolidation and further developments of the detector and of its software framework, ahead of the new challenges...
26. Delphes 3: A modular framework for fast simulation of a generic collider experiment
Alexandre Jean N Mertens (Universite Catholique de Louvain (UCL) (BE))
Delphes is a C++ framework, performing a fast multipurpose detector response simulation. The simulation includes a tracking system, embedded into a magnetic field, calorimeters and a muon system. The framework is interfaced to standard file formats and outputs observables such as isolated leptons, missing transverse energy and collections of jets, which can be used for dedicated analyses. The...
66. A Neural Network z-Vertex Trigger for Belle II
Mrs Sara Neuhaus (TU München)
The Belle II experiment, the successor of the Belle experiment, will go into operation at the upgraded KEKB collider (SuperKEKB) in 2016. SuperKEKB is designed to deliver an instantaneous luminosity $\mathcal{L} = 8 \times 10^{35}\mathrm{cm}^{-2}\mathrm{s}^{-1}$, a factor of 40 larger than the previous KEKB world record. The Belle II experiment will therefore have to cope with a much larger...
39. HistFitter: a flexible framework for statistical data analysis
Mr Geert-Jan Besjes (Radboud Universiteit Nijmegen), Dr Jeanette Lorenz (Ludwig-Maximilians-Universitat Munchen)
We present a software framework for statistical data analysis, called *HistFitter*, that has been used extensively in the ATLAS Collaboration to analyze data of proton-proton collisions produced by the Large Hadron Collider at CERN. Most notably, HistFitter has become a de-facto standard in searches for supersymmetric particles since 2012, with some usage for Exotic and Higgs boson...
90. Clad - Automatic Differentiation Using Cling/Clang/LLVM
Vasil Georgiev Vasilev (CERN)
Differentiation is ubiquitous in high energy physics, for instance for minimization algorithms in fitting and statistical analysis, detector alignment and calibration, theory. Automatic differentiation (AD) avoids well-known limitations in round-offs and speed, which symbolic and numerical differentiation suffer from, by transforming the source code of functions. We will present how AD...
97. ROOT 6
Axel Naumann (CERN)
The recently published ROOT 6 is the first major ROOT release in nine years. It opens a whole world of possibilities with full C++ support at the prompt and a built-in just-in-time compiler, while staying almost completely backward compatible. The ROOT team has started to make use of these new features, offering for instance an improved new implementation of TFormula, fast and type-safe...
49. Identifying the Higgs boson with a Quantum Computer
Mr Alexander Mott (California Institute of Technology)
A novel technique to identify events with a Higgs boson decaying to two photons and reject background events using neural networks trained on a quantum annealer is presented. We use a training sample composed of simulated Higgs signal events produced through gluon fusion and decaying to two photons and one composed of simulated background events with Standard Model two-photon final states. We...
27. Clustering analysis for muon tomography data elaboration in the Muon Portal project
Marilena Bandieramonte (Dept. of Physics and Astronomy, University of Catania and Astrophysical Observatory, INAF Catania)
Clustering analysis is a set of multivariate data analysis techniques through which is possible to gather statistical data units, in order to minimize the "logical distance" within each group and to maximize the one between groups. The "logical distance" is quantified by measures of similarity/dissimilarity between defined statistical units. Clustering techniques are traditionally applied to...
6. Densities mixture unfolding for data obtained from detectors with finite resolution and limited acceptance
Prof. Nikolay Gagunashvili (University of Akureyri, Borgir, v/Nordurslod, IS-600 Akureyri, Iceland & Max-Planck-Institut für Kernphysik, P.O. Box 103980, 69029 Heidelberg, Germany)
A mixture density model-based procedure for correcting experimental data for distortions due to finite resolution and limited detector acceptance is presented. The unfolding problem is known to be an ill-posed problem that cannot be solved without some a priori information about the solution such as, for example, smoothness or positivity. In the approach presented here the true distribution...
63. Geant4 developments in reproducibility, multi-threading and physics
John Apostolakis (CERN)
The Geant4 toolkit is used in the production detector simulations of most recent High Energy Physics experiments, and diverse applications in medical physics, radiation estimation for satellite electronics and other fields. We report on key improvements relevant to HEP applications that were provided in the most recent releases: 9.6 (Dec 2012) and 10.0 (Dec 2013). 'Strong' reproducibility...
62. The Run 2 ATLAS Analysis Event Data Model
Marcin Nowak (Brookhaven National Laboratory (US))
During the LHC's first Long Shutdown (LS1) ATLAS set out to establish a new analysis model, based on the experience gained during Run 1. A key component of this is a new Event Data Model (EDM), called the xAOD. This format, which is now in production, provides the following features: - A separation of the EDM into interface classes that the user code directly interacts with, and data...
14. GENFIT - a Generic Track-Fitting Toolkit
Johannes Rauch (T)
Genfit is an experiment-independent track-fitting toolkit, which combines fitting algorithms, track representations, and measurement geometries into a modular framework. We report on a significantly improved version of Genfit, based on experience gained in the Belle II, PANDA, and FOPI experiments. Improvements concern the implementation of additional track-fitting algorithms, enhanced...
34. An automated framework for hierarchical reconstruction of B mesons at the Belle II experiment
Christian Pulvermacher (KIT)
Belle II is an experiment being built at the $e^+e^-$ SuperKEKB B factory, and will record decays of a large number of $B \bar B$ pairs. This pairwise production of $B$ mesons allows analysts to use one correctly reconstructed $B$ meson to deduce the four-momentum and flavour of the other (signal-side) $B$ meson, without reconstructing any of its daughter particles. It also permits, in...
84. A novel robust and efficient algorithm for charged particle tracking in high background flux.
Mr Cristiano Fanelli (INFN Sezione di Roma, Università di Roma 'La Sapienza', Roma, Italy)
A new tracker based on the GEM technology is under development for the upcoming experiments in Hall A at Jefferson Lab, where a longitudinally polarized electron beam of 11 GeV, combined with innovative polarized targets, will provide luminosity up to 10$^{39}$/(s cm$^{2}$) opening exciting opportunities to investigate unexplored aspects of the inner structure of the nucleon and the dynamics...
57. HERAFitter - an open source QCD fit framework
Andrey Sapronov (Joint Inst. for Nuclear Research (RU))
We present the HERAFitter project, a unique platform for QCD analyses of hadron-induced processes in the context of multi-process and multi-experiment setting. Based on the factorisable nature of the hadronic cross sections into universal parton distribution functions (PDFs) and process dependent partonic scattering cross sections, HERAFitter allows determination of the PDFs from...
79. Pidrix: Particle Identification Matrix Factorization
Dr Evan Sangaline (Michigan State University)
Probabilistically identifying particles and extracting particle yields are fundamentally important tasks required in a wide range of nuclear and high energy physics analyses. Quantities such as ionization energy loss, time of flight, and Čerenkov angle can be measured in order to help distinguish between different particle species, but distinguishing becomes difficult when there is no clear...
102. High-resolution deconvolution methods for analysis of low amplitude noisy gamma-ray spectra
Vladislav Matoušek (Institute of Physics, Slovak Academy of Sciences)
The deconvolution methods are very efficient and widely used tools to improve the resolution in the spectrometric data. They are of great importance mainly in the tasks connected with decomposition of low amplitude overlapped peaks (multiplets) in the presence of noise. In the talk we will present a set of deconvolution algorithms and a study of their decomposition capabilities from the...
101. Combination of multivariate discrimination methods in the measurement of the inclusive top pair production cross section
Jiri Franc (Czech Technical University in Prague)
The application of multivariate analysis techniques in experimental high energy physics has been accepted as one of the fundamental tools in the discrimination phase, when the signal is rare and the background dominates. The purpose of this study is to present new approaches to variable selection based on phi-divergences, together with various statistical tests, and the combination of new...
94. Simulation Upgrades for the CMS experiment
David Lange (Lawrence Livermore Nat. Laboratory (US))
Over the past several years, the CMS experiment has made significant changes to its detector simulation application. The geometry has been generalized to include modifications being made to the CMS detector for 2015 operations, as well as model improvements to the simulation geometry of the current CMS detector and the implementation of a number of approved and possible future detector... | CommonCrawl |
In the staircase-shaped region below, all angles that look like right angles are right angles, and each of the eight congruent sides marked with a tick mark have length 1 foot. If the region has area 53 square feet, what is the number of feet in the perimeter of the region? [asy]
size(120);
draw((5,7)--(0,7)--(0,0)--(9,0)--(9,3)--(8,3)--(8,4)--(7,4)--(7,5)--(6,5)--(6,6)--(5,6)--cycle);
label("9 ft",(4.5,0),S);
draw((7.85,3.5)--(8.15,3.5)); draw((6.85,4.5)--(7.15,4.5)); draw((5.85,5.5)--(6.15,5.5)); draw((4.85,6.5)--(5.15,6.5));
draw((8.5,2.85)--(8.5,3.15)); draw((7.5,3.85)--(7.5,4.15)); draw((6.5,4.85)--(6.5,5.15)); draw((5.5,5.85)--(5.5,6.15));
[/asy]
We can look at the region as a rectangle with a smaller staircase-shaped region removed from its upper-right corner. We extend two of its sides to complete the rectangle: [asy]
size(120);
draw((5,7)--(0,7)--(0,0)--(9,0)--(9,3)--(8,3)--(8,4)--(7,4)--(7,5)--(6,5)--(6,6)--(5,6)--cycle);
draw((5,7)--(9,7)--(9,3),dashed);
[/asy] Dissecting the small staircase, we see it consists of ten 1 ft by 1 ft squares and thus has area 10 square feet. [asy]
size(120);
draw((5,7)--(0,7)--(0,0)--(9,0)--(9,3)--(8,3)--(8,4)--(7,4)--(7,5)--(6,5)--(6,6)--(5,6)--cycle);
draw((5,7)--(9,7)--(9,3),dashed);
draw((8,7)--(8,4)--(9,4),dashed); draw((7,7)--(7,5)--(9,5),dashed); draw((6,7)--(6,6)--(9,6),dashed);
[/asy] Let the height of the rectangle have length $x$ feet, so the area of the rectangle is $9x$ square feet. Thus we can write the area of the staircase-shaped region as $9x-10$. Setting this equal to $53$ and solving for $x$ yields $9x-10=53 \Rightarrow x=7$ feet.
Finally, the perimeter of the region is $7+9+3+5+8\cdot 1 = \boxed{32}$ feet. (Notice how this is equal to the perimeter of the rectangle -- if we shift each horizontal side with length 1 upwards and each vertical side with length 1 rightwards, we get a rectangle.) | Math Dataset |
\begin{document}
\bf
\centerline{Matching on a line}
\bf \centerline{Josef Bukac}
\rm \centerline{Bulharska 298} \centerline{55102 Jaromer-Josefov} \centerline{Czech Republic}
\bf \noindent Keywords: \rm bipartite, design of experiments, nonbipartite, $n$-tuple, triple, tripartite, quadripartite.
\bf \noindent MSC2010: \rm 90C27, 68Q17, 05C85.
\bf \noindent Abstract: \rm Matching is a method of the design of experiments. If we had an even number of patients and wanted to form pairs of patients such that their ages, for example, in each pair be as close as possible, we would use nonbipartite matching. Not only do we present a fast method to do this, we also extend our approach to triples, quadruples, etc.
In part 1 a matching algorithm uses $kn$ points on a line as vertices, pairs of vertices as edges, and either absolute values of differences or the squares of differences as weights or distances. It forms $n$ of $k$-tuples with the minimal sum of distances within each $k$-tuple in $O(n\log n)$ time.
In part 2 we present a trivial algorithm for bipartite matching with absolute values or squares of differences as weights and a generalisation to tripartite matching on tripartite graphs.
\bf Introduction
\rm Further references about the use of nonbipartite matching in experimental design are in the survey papers Beck (2015) and Lu (2011). Imagine we have something like 300 patients and we want to form pairs of patients such that the ages of the patients in each pair, age being an example of a confounding variable, are as close as possible. Our goal is to make applications of treatment A and treatment B comparable. In our paper we also show how to form triples that are convenient for an application of placebo, treatment A, or treatment B. Quadruples may be used for an application of placebo, treatment A, treatment B, and the interaction of A and B combined.
We picked age as an example of a trivial confounding variable, but there are sophisticated ways of defining such a variable, propensity score being one of them.
Matching is used when we want to avoid the effect of a confounding variable. If there are more such variables, it is customary to aggregate them to get just one variable, typically called a scale or score. Applications vary in the fields of medicine, social sciences, psychology, and education.
Should we want to use an $n$-dimensional space for $n$ confounding variables we would have to multiply each of these variables by some constant to take care of their importance, units, etc, and we would have to derive those constants somehow only to find out that scores are a better choice.
The repeatability of matching is important because, unlike randomization, matching gives the same result each time it is repeated, save for ties.
In such a setting, each individual becomes a vertex of a simple complete graph and the weight of each edge is defined as the absolute value $A$ of the difference between their scores or it is defined as the square $S$ of this difference.
In part 1 we want to show that the calculation of nonbipartite matching becomes trivial. Also triangle matching, termed $3$-matching, becomes an easy task and so does $4$-matching, and generally $n$-matching so far for $n\le 16$ in the case of absolute values of differences as weights or $n\le 8$ when the sum of squares of differences is used .
We consider a complete simple graph $G$ with an even
number $|V|$ of vertices $V.$ The set of edges is denoted as $E.$ Let $M$ be a subset of $E.$ $M$ is called a matching if no edges in $M$ are adjacent in $G.$ A matching $M$ is called perfect if each vertex in $V$ is incident to some edge in $M.$ We assume a weight $w(e)\ge 0$ is attached to every edge $e\in E.$ We are then looking for a perfect matching $M$ for which the sum of weights is maximal. Equivalently, we may look for a perfect matching with a minimal sum of weights by picking some upper bound $u$ of weights and form new weights as $u-w(e)$ for each $e\in E.$
The Edmonds method for finding a maximal weighted matching in a general weighted graph is presented in Papadimitriou (1982).
The time necessary for calculation is polynomial, $O(|V|^3)$, in the number of vertices. Writing the program would be tedious but we recommend Beck (2015) or an internet address through which the solution may be obtained:
\noindent http://biostat.mc.vanderbilt.edu/wiki/Main/NonbipartiteMatching
Even though the running time is polynomial, the degree 3 may turn out to be too high for practical calculations for a large number of vertices $|V|$ requiring a large $|V|$ by $|V|$ matrix of distances.
If the number of vertices is divisible by three and a constant $B$ is given, the decision problem whether there is a perfect 3-matching such that the sum over all the triples of the distances between the three points in each triple is less than some constant $B$ is known to be $NP$-complete. We may refer to the problem exact cover by 3-sets in Garey (1979) or Papadimitriou (1982). That is why the problems we study seem to be so discouraging.
In the second part of the paper we show a trivial method of calculating a minimal perfect matching on a regular complete bipartite graph with weights on edges being absolute values of differences of weights on vertices. This is only a stepping stone to the design of a method of calculation of a minimal perfect matching on a regular complete tripartite graph with the same definition of weights.
\bf Part 1
\bf 1.1. Matching on a line
\rm We study a complete graph the vertices of which are points on a real $x$-axis. These points are denoted as $x_i$.
The edges are the line segments between these points. The weight of each edge $(x_i, x_j)$ is defined as the distance
of its two endpoint $|x_i-x_j|.$
Since the possibility of repeated observations is common in statistics, we do not use sets, we use the notion of a $k$-tuple. We call $(1, 1, 2)$ a triple whereas a set would consist of two elements, 1 and 2.
\bf \noindent Definition 1.1. \rm\quad We define a distance $A$ within a $k$-tuple as the sum of the distances of all the pairs formed of the elements of the $k$-tuple. $$A(x_1, x_2,\dots , x_k)=
\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}|x_j-x_i|.$$
\rm There are $k(k-1)/2$ summands in this formula. Our calculations will be simplified by the following.
\bf \noindent Definition 1.2. \rm A $k$-tuple $(x_1, x_2,\dots ,x_k)$ is sorted if $x_1\le x_2\le\dots\le x_k.$
\bf \noindent Theorem 1.1. \rm If the $k$-tuple is sorted, the distance within the $k$-tuple is $$A(x_1, x_2,\dots , x_k)=\sum_{i=1}^k (2i-k-1)x_i$$
\bf \noindent Proof.\rm\quad $$A(x_1, x_2,\dots , x_k) =\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}(x_j-x_i) =\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}x_j-\sum_{i=1}^{k-1}\sum_{j=i+1}^{k}x_i$$ $$=x_2+2x_3+\dots +(k-1)x_k-\sum_{i=1}^{k-1}(k-i)x_i =\sum_{i=1}^k (i-1)x_i-\sum_{i=1}^{k-1}(k-i)x_i$$
\noindent $\quad =\sum_{i=1}^k (2i-k-1)x_i.$
\rm For example, if a sorted pair $(x_1, x_2), $ $x_1\le x_2,$ is given, the distance is defined as $A(x_1, x_2)=x_2-x_1.$ For a sorted triple $(x_1, x_2, x_3),$ $x_1\le x_2\le x_3,$ we define the distance within as $A(x_1, x_2, x_3)=2(x_3-x_1).$
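The closed form of Theorem 1.1 is easy to check mechanically. The following short Python sketch (our own illustration, not part of the original computations) evaluates the distance within a sorted $k$-tuple both by the pairwise definition and by the closed form, so the two can be compared on examples.

\begin{verbatim}
# Distance within a k-tuple: pairwise definition vs. closed form (Theorem 1.1)
def distance_pairwise(x):
    return sum(abs(x[j] - x[i])
               for i in range(len(x)) for j in range(i + 1, len(x)))

def distance_sorted(x):          # x must be sorted
    k = len(x)                   # A(x) = sum_i (2i - k - 1) x_i, i = 1..k
    return sum((2 * (i + 1) - k - 1) * v for i, v in enumerate(x))

x = sorted([1, 3, 4, 5, 8, 9])
assert distance_pairwise(x) == distance_sorted(x)   # both give 56
\end{verbatim}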
\bf \noindent Definition 1.3. \rm Let a $kn$-tuple be given. A partition of this $kn$-tuple into $n$ of $k$-tuples is called a $k$-tuple partition of a $kn$-tuple.
\bf \noindent Definition 1.4. \rm Let a $kn$-tuple be given. A $k$-tuple partition of this $kn$-tuple is called minimal if the sum of the distances within taken over all the $n$ of $k$-tuples is less than or equal to the sum of distances within taken over $k$-tuples of any other $k$-tuple partition.
\rm We note that there may be more than one minimal partition. For a $kn$-tuple there are $(kn)!\big/(k!)^n$ $k$-tuple partitions from which we want to find those with a minimal sum of distances within $k$-tuples. There are too many of them for the brute force method to work for a large $n.$ But it will work for $n=2$ if $k$ is small.
The greedy method will not work either. That can be shown by way of an example $(1, 3, 4, 5, 8, 9)$ in which the triple with the smallest sum of distances within is $(3, 4, 5),$ $A(3,4,5)=4$, the distance within the remaining items is $A(1,8,9)=16.$ The sum is $A(3,4,5)+A(1,8,9)=20.$ We get a smaller sum of distances within if we take $(1, 3, 4)$ and $(5, 8, 9)$ yielding $A(1,3,4)+$ $A(5,8,9)=$ $6+8=$ $14.$
First we want to show what a minimal partition for $2k$-tuples looks like. If we can do that, we will use induction to show it works for any $n>2.$
\bf \noindent Theorem 1.2. \rm Let a sorted $4$-tuple $(x_1, x_2, x_3, x_4)$ be given. Then the two pairs $(x_1, x_2)$ and $(x_3, x_4)$ have a minimal sum of distances defined as absolute values of differences.
\bf \noindent Proof.\rm\quad The sum of distances of $(x_1, x_2)$ and $(x_3, x_4)$ is $x_2-x_1+x_4-x_3.$ We form other possible partitions into pairs, calculate the sum of their distances, and compare them with the sum of distances of $(x_1, x_2)$ and $(x_3, x_4).$
Other possible sorted pairs are:
1) $(x_1, x_3)$ and $(x_2, x_4).$ The difference is $$A(x_1, x_3)+A(x_2, x_4)-(A(x_1, x_2)+A(x_3, x_4)) =2(x_3-x_2)\ge 0.$$
2) $(x_1, x_4)$ and $(x_2, x_3).$ The difference is $$A(x_1, x_4)+A(x_2, x_3)-(A(x_1, x_2)+A(x_3, x_4)) =2(x_3-x_2)\ge 0.$$
\bf \noindent Theorem 1.3. \rm Let a sorted $6$-tuple $(x_1, x_2,\dots x_6)$ be given. Then the two sorted triples $(x_1, x_2, x_3)$ and $(x_4, x_5, x_6)$ have a minimal sum of distances within defined as the sum of absolute values of differences.
\bf \noindent Proof.\rm\quad We form all the other possible triples, calculate the sum of their distances within, and subtract from them the sum of distances $A(x_1, x_2, x_3 )$ $+A(x_4, x_5, x_6).$
Other possible triples are listed in such a way that $x_1$ appears in the first triple because the order does not matter in this case:
\noindent 1)\quad $(x_1, x_2, x_4)$ and $(x_3, x_5, x_6)$
\noindent $A(x_1, x_2, x_4)+A(x_3, x_5, x_6)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$4(x_4-x_3)\ge 0$
\noindent 2)\quad $(x_1, x_2, x_5)$ and $(x_3, x_4, x_6)$
\noindent $A(x_1, x_2, x_5)+A(x_3, x_4, x_6)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-2x_3)\ge 0$
\noindent 3)\quad $(x_1, x_2, x_6)$ and $(x_3, x_4, x_5)$
\noindent $A(x_1, x_2, x_6)+A(x_3, x_4, x_5)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-2x_3)\ge 0$
\noindent 4)\quad $(x_1, x_3, x_4)$ and $(x_2, x_5, x_6)$
\noindent $A(x_1, x_3, x_4)+A(x_2, x_5, x_6)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(2x_4-x_3-x_2)\ge 0$
\noindent 5)\quad $(x_1, x_3, x_5)$ and $(x_2, x_4, x_6)$
\noindent $A(x_1, x_3, x_5)+A(x_2, x_4, x_6)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-x_3-x_2)\ge 0$
\noindent 6)\quad $(x_1, x_3, x_6)$ and $(x_2, x_4, x_5)$
\noindent $A(x_1, x_3, x_6)+A(x_2, x_4, x_5)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-x_3-x_2)\ge 0$
\noindent 7)\quad $(x_1, x_4, x_5)$ and $(x_2, x_3, x_6)$
\noindent $A(x_1, x_4, x_5)+A(x_2, x_3, x_6)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-x_3-x_2)\ge 0$
\noindent 8)\quad $(x_1, x_4, x_6)$ and $(x_2, x_3, x_5)$
\noindent $A(x_1, x_4, x_6)+A(x_2, x_3, x_5)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(x_5+x_4-x_3-x_2)\ge 0$
\noindent 9)\quad $(x_1, x_5, x_6)$ and $(x_2, x_3, x_4)$
\noindent $A(x_1, x_5, x_6)+A(x_2, x_3, x_4)-A(x_1, x_2, x_3)-A(x_4, x_5, x_6)=$
$2(2x_4-x_3-x_2)\ge 0$
That finishes the proof. It was actually generated by a computer.
\rm We may consider a pair of $k$-tuples for any $k\ge 2$ but we have to generate all such pairs of $k$-tuples while keeping $x_1$ in the first of them. It means the number of all such pairs is ${2k-1}\choose {k-1}$. We do not need any symbolic algebra to do this for it will suffice to keep in mind that the $k$-tuples are generated as combinations represented as subscripts. The sums of the two distances within each $k$-tuple are expressed as coefficients assigned to subscripts. The resulting inequality is obtained by comparing the coefficients as indicated in Theorems 1.2 and 1.3.
Not only does the amount of work on each $k$-tuple increase approximately proportionally with respect to $k,$ but, more importantly, the growth of the binomial coefficients ${2k-1}\choose {k-1}$ becomes prohibitive for calculations as $k$ increases. When we have the time of calculation for $k,$ the time necessary for $k+1$ will be approximately equal to the time for $k$ times the following factor
$$\frac{k+1}{k}{{2(k+1)-1}\choose{(k+1)-1}}\Big/ {{2k-1}\choose{k-1}}=4+\frac{2}{k}.$$ This is the reason why we have been able to verify our claims so far only for $k\le 16.$ We stopped at 16 also because it can be used to form $4$ by $4$ tables.
\bf \noindent Definition 1.5. \rm Let $(x_1,x_2,\dots ,x_k)$ and $(y_1,y_2,\dots ,y_k)$ be two distinct sorted $k$-tuples. The smallest subscript $j$ for which $x_j\ne y_j$ is called the smallest subscript of discordance.
We note that if $(x_1,x_2,\dots ,x_k)=(y_1,y_2,\dots ,y_k),$ the smallest subscript of discordance is not defined.
\bf \noindent Theorem 1.4. \rm If for any sorted $2k$-tuple $(x_1, x_2,\dots , x_{2k})$ the two sorted $k$-tuples $(x_1, x_2,\dots , x_{k})$ and $(x_{k+1}, x_{k+2},\dots , x_{2k})$ are the minimal solution of the $k$-matching problem, then the minimal solution of the $k$-matching problem for a sorted $kn$-tuple, $n>0,$ is given by $n$ sorted $k$-tuples $$(x_{(i-1)k+1}, x_{(i-1)k+2},\dots , x_{(i-1)k+k})$$ for $i=1,\dots , n.$
\bf \noindent Proof. \rm\quad We prove the theorem by induction. If $n=1,$ the theorem is obvious. If $n=2,$ the theorem follows directly from its assumption. We assume the theorem is true if $n-1>1$ and show it is true for $n.$
We consider all the possible minimal $k$-tuple partitions. If there is a $k$-tuple partition such that for some sorted $k$-tuple $(y_1,y_2,\dots ,y_k)$ we have $x_i=y_i$ for all $i=1, 2,\dots ,k,$ we are done; we may also exclude in the following the case that the smallest subscript of discordance is not defined.
If $(x_1,x_2,\dots ,x_k)\ne (y_1,y_2,\dots ,y_k),$ we will show a contradiction. We will compare $k$-tuples with $(x_1, x_2, \dots , x_k).$ Out of all the minimal $k$-tuple partitions we pick the one containing the $k$-tuple $(y_1, y_2, \dots , y_k)$ for which the smallest subscript of discordance $j$ is the highest. It is obvious that for such a $k$-tuple $y_1=x_1$ holds for otherwise the smallest subscript of discordance $j$ would be $1.$ If $y_1=x_1,$ we have $j>1.$
Since $j,$ where $1<j\le k,$ is the lowest subscript for which $y_j\ne x_j,$ this $x_j$ must be in some other $k$-tuple $(z_1, z_2, \dots , z_k)$ in the partition. We concatenate these two $k$-tuples to obtain a $2k$-tuple $(y_1, y_2, \dots , y_k, z_1, z_2, \dots , z_k)$ and apply the assumption of the theorem to obtain a minimal solution of the $k$-matching problem on this $2k$-tuple as $(t_1, t_2,\dots , t_j,\dots , t_k )$ and $(u_1, u_2,\dots , u_k )$ where $t_i=x_i$ for $i=1, 2, \dots , j.$
We have two cases. In the first case we obtain a $k$-tuple partition containing $(x_1,x_2, \dots ,x_k),$ which is a contradiction.
If we do not obtain a partition containing $(x_1,x_2, \dots ,x_k),$ the smallest subscript of discordance is some $i,$ where $j<i\le k,$ when $(t_1, t_2,\dots , t_k)$ is compared with $(x_1, x_2,\dots , x_k).$ We have obtained a $k$-tuple partition for which the smallest subscript of discordance is higher than $j,$ contradicting the assumption.
The proof is finished by removing $x_1,x_2,\dots ,x_k$ from the original $kn$-tuple obtaining a $(n-1)k$-tuple.
\bf \noindent Corollary. \rm If the assumption in theorem 1.4 holds, then the minimal solution of the $k$-matching problem for a not necessarily sorted $kn$-tuple, $n>0,$ is obtained in the running time necessary for sorting the $kn$-tuple.
\rm It means the matching problem is solved in $O(N\log N)$ time where $N=kn$ is the number of items to be matched.
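In code, the whole procedure amounts to sorting and slicing. The following Python sketch (ours; the function name is arbitrary) returns the minimal $k$-tuple partition described in Theorem 1.4.

\begin{verbatim}
# Minimal k-matching on a line: sort, then cut into consecutive k-tuples.
def k_matching(values, k):
    if len(values) % k != 0:
        raise ValueError("the number of values must be divisible by k")
    s = sorted(values)                      # O(N log N), N = k*n
    return [tuple(s[i:i + k]) for i in range(0, len(s), k)]

# Example from the text:
print(k_matching([1, 3, 4, 5, 8, 9], 3))   # [(1, 3, 4), (5, 8, 9)]
\end{verbatim}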
\bf 1.2 Sum of squares of differences
\rm We all know that statisticians would prefer the sum of squares of all differences to evaluate the distance within a $k$-tuple. Let $(x_1, x_2,\dots , x_k)$ be a $k$-tuple; its distance within is defined as $$S(x_1, x_2,\dots , x_k)= \sum_{i=1}^{k-1}\sum_{j=i+1}^{k}(x_j-x_i)^2.$$ Some avid statisticians would even require the minimization of the sum of variances, but this is equivalent to the sum of squares of all the differences, as explained in the appendix.
First we want to show what a minimal partition for $2k$-tuples looks like.
\bf \noindent Theorem 1.5. \rm Let a sorted $4$-tuple $(x_1, x_2, x_3, x_4)$ be given. Then the two pairs $(x_1, x_2)$ and $(x_3, x_4)$ have a minimal sum of squares of all differences.
\bf \noindent Proof.\rm\quad The sum of distances of $(x_1, x_2)$ and $(x_3, x_4)$ is $S(x_1, x_2)+S(x_3, x_4)=(x_2-x_1)^2+(x_4-x_3)^2.$ We form other possible partitions into pairs, calculate the sum of the differences squared, and compare them with the sum of distances $S(x_1, x_2)+S(x_3, x_4).$ Other possible sorted pairs are:
\noindent 1) $(x_1, x_3)$ and $(x_2, x_4).$ The difference is
$S(x_1, x_3)+S(x_2, x_4)-(S(x_1, x_2)+S(x_3, x_4)) =2(x_4-x_1)(x_3-x_2)\ge 0.$
\noindent 2) $(x_1, x_4)$ and $(x_2, x_3).$ The difference is
$S(x_1, x_4)+S(x_2, x_3)-(S(x_1, x_2)+S(x_3, x_4)) =2(x_3-x_1)(x_4-x_2)\ge 0.$
\bf \noindent Theorem 1.6. \rm Let a sorted $6$-tuple $(x_1, x_2,\dots x_6)$ be given. Then the two sorted triples $(x_1, x_2, x_3)$ and $(x_4, x_5, x_6)$ have a minimal sum of squares of all differences.
\bf \noindent Proof.\rm\quad We form all the other possible triples, calculate the sum of their distances, and subtract from them the sum of distances $S(x_1, x_2, x_3 )+S(x_4, x_5, x_6).$
Other possible triples are listed in such a way that $x_1$ appears in the first triple because the order does not matter in this case:
\noindent 1)$S(x_1, x_2, x_4)+S(x_3, x_5, x_6)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_4-x_3)(x_6+x_5-x_2-x_1)\ge 0$
\noindent 2)$S(x_1, x_2, x_5)+S(x_3, x_4, x_6)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_3-x_5)(x_1+x_2-x_4-x_6)\ge 0$
\noindent 3)$S(x_1, x_2, x_6)+S(x_3, x_4, x_5)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_6-x_3)(x_5+x_4-x_2-x_1)\ge 0$
\noindent 4)$S(x_1, x_3, x_4)+S(x_2, x_5, x_6)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_4-x_2)(x_6+x_5-x_3-x_1)\ge 0$
\noindent 5)$S(x_1, x_3, x_5)+S(x_2, x_4, x_6)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_5-x_2)(x_6+x_4-x_3-x_1)\ge 0$
\noindent 6)$S(x_1, x_3, x_6)+S(x_2, x_4, x_5)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_6-x_2)(x_5+x_4-x_3-x_1)\ge 0$
\noindent 7)$S(x_1, x_4, x_5)+S(x_2, x_3, x_6)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_6-x_1)(x_5+x_4-x_3-x_2)\ge 0$
\noindent 8)$S(x_1, x_4, x_6)+S(x_2, x_3, x_5)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_5-x_1)(x_6+x_4-x_3-x_2)\ge 0$
\noindent 9)$S(x_1, x_5, x_6)+S(x_2, x_3, x_4)-S(x_1, x_2, x_3)-S(x_4, x_5, x_6)=$
$2(x_4-x_1)(x_6+x_5-x_3-x_2)\ge 0$
That finishes the proof.
It is interesting to see that the factorization of all the quadratic forms could be done. We actually wrote a program that checked the factorization for $2\le k\le 8$ and verified the nonnegativity of each factor.
The final step is the use of theorem 1.4 to show how to calculate $k$-matching for $2\le k\le 8.$ Now we see that it does not matter which of the two mentioned distances within we use, the sum of absolute values of all the differences or the sum of squares of all the differences; we get the same $k$-matching.
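While the program mentioned above worked symbolically, statements of this kind are also easy to stress-test numerically. The sketch below (ours, not the authors' program) compares the leading-block partition of a sorted $2k$-tuple against all other partitions into two $k$-tuples on random data, using the sum-of-squares distance $S.$

\begin{verbatim}
# Brute-force numeric check: among all partitions of a sorted 2k-tuple
# into two k-tuples, splitting into the first k and the last k values
# minimizes the total within-distance S (sum of squared differences).
import random
from itertools import combinations

def S(t):
    return sum((t[j] - t[i]) ** 2
               for i in range(len(t)) for j in range(i + 1, len(t)))

def check(k, trials=1000):
    for _ in range(trials):
        x = sorted(random.random() for _ in range(2 * k))
        best = S(x[:k]) + S(x[k:])
        for left in combinations(range(1, 2 * k), k - 1):
            idx = (0,) + left            # first k-tuple always contains x_1
            rest = [i for i in range(2 * k) if i not in idx]
            other = S([x[i] for i in idx]) + S([x[i] for i in rest])
            assert best <= other + 1e-12
    return True

print(check(3))
\end{verbatim}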
\bf 1.3 Statistical applications
\rm The $k$-tuples obtained by our algorithm are sorted. That could have an unpleasant effect on statistical procedures because of the inequality of the means of the first entries of the $k$-tuples as compared with the means of the last entries of the $k$-tuples. We know they are different, as long as $x_1,x_2,\dots ,x_{kn}$ are not all equal.
We want to avoid randomization in the spirit of our paper. What we are trying to achieve is the rearrangement of the items in $k$-tuples in such a way that all the means over the $i$-th items, $i=1,\dots, k,$ are as close as possible. The minimization process reminds us of an NP-complete optimization partition problem even though typically the numbers $x_1,x_2,\dots ,x_{kn}$ are not integers.
Even though partitioning is beyond the scope of this paper, one heuristic way of handling the problem is the following: sort the $k$-tuples in descending order of their distances within, keep track of the $k$ running subtotals starting from the first $k$-tuple, and rearrange each consecutive $k$-tuple so as to keep the differences among the subtotals as small as possible at each step; a sketch of this heuristic is given below. This approach obviously does not guarantee that we obtain the smallest possible differences among the means.
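One possible reading of this heuristic in code (our sketch; it enumerates all $k!$ permutations of each $k$-tuple, so it is practical only for small $k$) operates on the sorted $k$-tuples produced by the matching algorithm:

\begin{verbatim}
# Heuristic rearrangement of sorted k-tuples so that the k column
# totals (and hence the column means) stay as close as possible.
from itertools import permutations

def within(t):                   # Theorem 1.1; t must be sorted
    k = len(t)
    return sum((2 * (i + 1) - k - 1) * v for i, v in enumerate(t))

def rearrange(tuples):
    k = len(tuples[0])
    totals = [0.0] * k
    result = []
    for t in sorted(tuples, key=within, reverse=True):
        # pick the permutation keeping the running column totals closest
        best = min(permutations(t),
                   key=lambda p: max(a + b for a, b in zip(totals, p))
                              - min(a + b for a, b in zip(totals, p)))
        totals = [a + b for a, b in zip(totals, best)]
        result.append(best)
    return result
\end{verbatim}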
\bf Part 2
\bf 2.1 Bipartite graphs
\rm Even though algorithms for finding optimal bipartite matching are so well known that they are presented in introductory textbooks, such as Bondy (1976), we present another approach because it will find applications in tripartite matching.
The regular complete bipartite graph consists of two disjoint vertex sets $A$ and $B,$ $|A|=|B|=n,$ and edges $A\times B.$ We assume that to each of the vertices in $A$ and in $B$ real numbers $x_i$ and $y_i,$ respectively, are assigned as their values. The weight associated with each edge $(a_i, b_j)$ is defined as
either $w_{abs}(a_i, b_j)=|x_i-y_j|$ for all $1\le i, j\le n$ or $w_{sq}(a_i, b_j)=(x_i-y_j)^2$ for all $1\le i, j\le n.$
\bf \noindent Definition 2.1. \rm A perfect matching in a regular complete bipartite graph with vertex set $A\cup B$ is a subset of edges $M_{A,B}$ such that each vertex of $A$ is connected by an edge to one vertex of $B$ and each vertex of $B$ is connected to one vertex of $A.$
We use the notation $M_{A,B}$ to indicate that we are dealing with the vertex set $A\cup B.$
\bf \noindent Definition 2.2. \rm The weight of a perfect matching is $$w(M_{A,B})=\sum_{(a,b)\in M_{A,B}} w(a,b)$$
\bf \noindent Definition 2.3. \rm A perfect matching $M_{A,B}^{min}$ is minimal if its weight is less than or equal to that of any other perfect matching, $w(M_{A,B}^{min})\le w(M_{A,B}).$
To avoid any trouble, we mention that we do not make any distinction between the vertices and the values they are assigned; we again consider $n$-tuples of real numbers. Sorted $n$-tuples are described in definition 1.2.
\bf \noindent Theorem 2.1. \rm Let two sorted $n$-tuples, $(x_1, x_2, \dots x_n)$ and
$(y_1, y_2, \dots y_n)$ be given. If the weight of each edge is defined as the absolute value of the difference between $x_i$ and $y_j,$ $w_{abs}(a_i,b_j)=|x_i-y_j|,$ for each $1\le i, j\le n,$ then the minimal perfect matching consists of edges with values $(x_1, y_1),$ $(x_2, y_2),$ $\dots ,$ $(x_n, y_n).$
\bf \noindent Proof. \rm\quad The theorem is true for $n=1.$ If $n>1,$ we assume it is true for $n-1.$ In a minimal perfect matching there is an edge with one endpoint value $x_1.$ If the other endpoint value of this edge is $y_1,$ we are done. If not, the other endpoint value is $y_j$ for some $j>1.$ Another edge must have $y_1$ as its endpoint value, this edge has some $x_i,$ $i>1$ as its other endpoint value.
Let the required $x_1, x_i, y_1, y_j$ be given. We first consider the three cases when $x_1$ is less than the rest of the points, $x_i,$ $y_1,$ $y_j.$ Three cases are listed depending on the position of $x_i.$
Two options are possible in each case. The one containing $x_1, y_1$ is subtracted from the other one.
1. Let $x_1\le x_i\le y_1\le y_j.$
We subtract $|x_1-y_1|+|x_i-y_j|=$ $y_1-x_1+y_j-x_i$ from
$|x_1-y_j|+|x_i-y_1|=$ $y_j-x_1+y_1-x_i.$ The result is zero because we subtract the same expression.
2. Let $x_1\le y_1\le x_i\le y_j.$ Then
$|x_1-y_1|+|x_i-y_j|=$ $y_1-x_1+y_j-x_i$ is subtracted from
$|x_1-y_j|+|x_i-y_1|=$ $y_j-x_1+x_i-y_1,$ the difference is $y_j-x_1+x_i-y_1-(y_1-x_1+y_j-x_i)=$ $2x_i-2y_1=$ $2(x_i-y_1)\ge 0.$
3. Let $x_1\le y_1\le y_j\le x_i.$

Then $|x_1-y_1|+|x_i-y_j|=$ $y_1-x_1+x_i-y_j$ is subtracted from

$|x_1-y_j|+|x_i-y_1|=$ $y_j-x_1+x_i-y_1,$ the difference is $y_j-x_1+x_i-y_1-$ $(y_1-x_1+x_i-y_j)=$ $2(y_j-y_1)\ge 0.$
In the case that $y_1$ is the smallest number we just swap $x$'s with $y$'s.
We conclude that the minimal matching contains an edge with endpoint values $(x_1, y_1)$ and induction makes sense.
\bf \noindent Theorem 2.2. \rm Let two sorted $n$-tuples, $(x_1, x_2, \dots x_n)$ and $(y_1, y_2, \dots y_n)$ be given. If the weight of each edge is defined as $w_{sq}(a_i,b_j)=(x_i-y_j)^2,$ for each $1\le i, j\le n,$ then the minimal perfect matching consists of edges with values $(x_1, y_1),$ $(x_2, y_2),$ $\dots ,$ $(x_n, y_n).$
\bf \noindent Proof. \rm\quad The theorem is true for $n=1.$ If $n>1,$ we assume it is true for $n-1.$ In a minimal perfect matching there is an edge with one endpoint value $x_1.$ If the other endpoint value of this edge is $y_1,$ we are done. If not, the other endpoint value is $y_j$ for some $j>1.$ Another edge must have $y_1$ as its endpoint value, this edge has some $x_i,$ $i>1$ as its other endpoint value.
We subtract $(x_1-y_1)^2+(x_i-y_j)^2=x_1^2+y_1^2-2x_1y_1+x_i^2+y_j^2-2x_iy_j$ from $(x_1-y_j)^2+(x_i-y_1)^2=x_1^2+y_j^2-2x_1y_j+y_1^2+x_i^2-2x_iy_1.$ The difference is $-2x_1y_j-2x_iy_1+2x_1y_1+2x_iy_j=2(x_i-x_1)(y_j-y_1)\ge 0.$
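Computationally, Theorems 2.1 and 2.2 reduce minimal bipartite matching with the weights $w_{abs}$ or $w_{sq}$ to two sorts. A minimal Python sketch (ours):

\begin{verbatim}
# Minimal bipartite matching: sort both sides and pair by rank.
def bipartite_matching(xs, ys, w=lambda x, y: abs(x - y)):
    pairs = list(zip(sorted(xs), sorted(ys)))
    return pairs, sum(w(x, y) for x, y in pairs)

pairs, total = bipartite_matching([3, 1, 2], [10, 8, 9])
# pairs == [(1, 8), (2, 9), (3, 10)], total == 21
\end{verbatim}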
The property described in theorems 2.1 or 2.2 not only allows us to calculate minimal matching quickly, it will be used in the construction of tripartite matching. The following definition will allow us to formulate the results in a bit more general but simple setting.
\bf \noindent Definition 2.4. \rm\quad Let two sorted $n$-tuples, $(x_1, x_2, \dots x_n)$ and $(y_1, y_2, \dots y_n)$ be given, and let a weight of each edge be defined as $w(x_i,y_j)$ for each $1\le i, j\le n.$ If the minimal perfect matching consists of edges with values $(x_1, y_1),$ $(x_2, y_2),$ $\dots ,$ $(x_n, y_n)$ for any $(x_1, x_2, \dots x_n)$ and $(y_1, y_2, \dots y_n),$ then the weight $w$ is called line matching or LM.
Counterexample: Let a weight be defined as a product $w_p(x_i, y_j)=x_iy_j.$ If we consider $x=(1, 2, 3)$ and $y=(1, 2, 3),$ as an example, then the sum of products is $1*1+2*2+3*3=14.$ If we use the reverse order $z=(3,2,1),$ then the sum of products is $1*3+2*2+3*1=10<14.$ As a result we can say that the weight $w_p$ defined as a product is not LM.
We will not study which weights are LM and which are not. It suffices to see that the weights $w_{abs}$ and $w_{sq}$ are the ones with LM property and those are the ones that would be used in practice.
\bf 2.2 Tripartite graphs
\rm A regular complete tripartite graph is the union of three disjoint vertex sets $A,$ $B,$ and $C,$ for which $|A|=|B|=|C|=n,$ and edges in $A\times B,$ $B\times C,$ and $C\times A.$ For any $a\in A,$ $b\in B,$ and $c\in C$ the edges are denoted as $(a,b),$ $(b,c),$ and $(c,a)$ respectively.
We define a matching $M_{A,B,C}\subset A\times B\times C$ as a set of triples of vertices such that for any two distinct triples $(a_{i_1}, b_{j_1}, c_{k_1})\in M_{A,B,C}$ and $(a_{i_2}, b_{j_2}, c_{k_2})\in M_{A,B,C}$ we have $a_{i_1}\ne a_{i_2},$ $b_{j_1} \ne b_{j_2},$ and $c_{k_1}\ne c_{k_2}.$
\bf \noindent Definition 2.5.
\rm\quad A matching $M_{A,B,C}$ is called perfect if the number of triples in $M_{A,B,C}$ is $n=|A|=|B|=|C|.$
We assume a nonnegative weight of each of the edges is defined for each edge $w(a_i,b_j),$ $w(b_i,c_j),$ and $w(c_i,a_j)$ for any $1\le i\le n$ and $1\le j\le n.$
\bf \noindent Definition 2.6. \rm\quad If a perfect matching $M_{A,B,C}$ is given, we define its weight $w(M_{A,B,C})$ as $$w(M_{A,B,C})=\sum_{(a,b,c)\in M_{A,B,C}}\big( w(a,b)+w(b,c)+w(c,a)\big).$$
This definition is in accordance with Definition 1.1 where all the weights of edges in a complete graph with vertices $a, b,$ and $c$ are included in the sum.
\bf \noindent Definition 2.7. \rm\quad A perfect matching $M_{A,B,C}^{min}$ is called minimal if its weight is minimal, that is, $w(M_{A,B,C}^{min})\le w(M_{A,B,C})$ for any other perfect matching $M_{A,B,C}.$
\bf \noindent Theorem 2.3. \rm
Let $A,$ $B,$ and $C$ be the vertex sets of the same cardinality of a complete tripartite graph $A\cup B\cup C$ with edges in $A\times B,$ $B\times C,$ and $C\times A.$ Then $$w(M_{A,B}^{min})+w(M_{B,C}^{min})+w(M_{C,A}^{min})\le w(M_{A,B,C}^{min}) .$$
\bf \noindent Proof. \rm\quad We check that $$w(M_{A,B}^{min})\le \sum_{(a,b,c)\in M_{A,B,C}^{min}}w(a,b),$$ $$w(M_{B,C}^{min})\le \sum_{(a,b,c)\in M_{A,B,C}^{min}}w(b,c),$$ $$w(M_{C,A}^{min})\le \sum_{(a,b,c)\in M_{A,B,C}^{min}}w(c,a).$$ Due to definitions 2.2 through 2.7 the sum of these three inequalities yields the result.
1) An application of this theorem in a general setting like this may be found in estimating the accuracy of some heuristic for finding a perfect matching. If we obtain a perfect matching $M_{A,B,C}^{heu}$ in a complete tripartite graph, we may use theorem 2.3 to estimate the accuracy of $M_{A,B,C}^{heu}$ as $$\frac{w(M_{A,B,C}^{heu})}{w(M_{A,B,C}^{min})}\le \frac{w(M_{A,B,C}^{heu})}{w(M_{A,B}^{min})+w(M_{B,C}^{min})+w(M_{C,A}^{min})}.$$
2) If an inequality in this theorem 2.3 is satisfied as an equality for some perfect matching $M_{A,B,C}$, that is, $w(M_{A,B,C})=w(M_{A,B}^{min})+w(M_{B,C}^{min})+w(M_{C,A}^{min}),$ we have a minimal solution.
3) The technique of the proof of theorem 2.3 may be used in other situations, such as $4$-partite matching or $k$-partite matching.
\bf 2.3 Minimal matching on tripartite graphs
\rm Let $A,$ $B,$ and $C$ be the vertex sets of the same number of vertices. We form a complete tripartite graph $A\cup B\cup C$ with edges in $A\times B,$ $B\times C,$ and $C\times A.$
We assume that to each of the vertices in $A,$ $B,$ and $C$ real numbers $x_i,$ $y_i,$ and $z_i$ are assigned respectively. A weight of each of the edges is defined as
$w(a_i,b_j)=|x_i-y_j|,$ $w(b_i,c_j)=|y_i-z_j|,$ and $w(c_i,a_j)=|z_i-x_j|$ for any $1\le i\le n$ and $1\le j\le n.$ Another way to define the weights is $w(a_i,b_j)=(x_i-y_j)^2,$ $w(b_i,c_j)=(y_i-z_j)^2,$ and $w(c_i,a_j)=(z_i-x_j)^2.$ In general the weight has to have property LM. Without any loss of generality we assume the $n$-tuples $(x_1, x_2,\dots , x_n),$ $(y_1, y_2,\dots , y_n),$ and $(z_1, z_2,\dots , z_n),$ are sorted. If not, we sort them together with $a_i,$ $b_j,$ and $c_k.$ Obtaining sorted $n$-tuples can be done in $O(n\log n)$ time.
\bf \noindent Theorem 2.4. \rm
Let $A,$ $B,$ and $C$ be the vertex sets, $|A|=$ $|B|=$ $|C|=n,$ of a complete tripartite graph $A\cup B\cup C$ with edges in $A\times B,$ $B\times C,$ and $C\times A.$ If the vertices are assigned real values corresponding to sorted $n$-tuples $(x_1, x_2,\dots , x_n),$ $(y_1, y_2,\dots , y_n),$ and $(z_1, z_2,\dots , z_n),$ then the minimal matching, with respect to weights with property LM, is given by $(a_1,b_1,c_1),$ $(a_2,b_2,c_2),$ $\dots ,$ $(a_n,b_n,c_n).$
\bf \noindent Proof. \rm\quad We claim the matching $(a_1, b_1, c_1),$ $(a_2, b_2, c_2),$ $\dots ,$ $(a_n, b_n, c_n)$ is the minimal one. When we form $w(x_i, y_i)+w(y_i, z_i)+w(z_i, x_i)$ for each triple separately and add them up over $i$, we get the same sum as when we calculate $\sum_{i=1}^n w(x_i, y_i)+$ $\sum_{i=1}^n w(y_i, z_i)+$ $\sum_{i=1}^n w(z_i, x_i).$
It shows we get an equality sign in the inequality in theorem 2.3 which, in turn, means that we have obtained a minimal matching.
We would proceed in the same way in the case of weights defined as squares of differences.
We recall that $|x_i-y_i|+|y_i-z_i|+|z_i-x_i|$ is the distance within the triple, $A(x_i, y_i, z_i),$ introduced in definition 1.1.
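The construction of Theorem 2.4 is equally direct in code: sort all three value $n$-tuples and align them by rank. A sketch (ours), for any LM weight:

\begin{verbatim}
# Minimal tripartite matching for an LM weight: sort all three sides and zip.
def tripartite_matching(xs, ys, zs, w=lambda a, b: abs(a - b)):
    triples = list(zip(sorted(xs), sorted(ys), sorted(zs)))
    total = sum(w(x, y) + w(y, z) + w(z, x) for x, y, z in triples)
    return triples, total
\end{verbatim}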
\bf Conclusion
\rm Results in part 1 may be used as a starting value for finding an $n$-matching in a Euclidean space. We may fit a line to data to provide a starting $n$-tuple partition followed by a local search. One way to do the local search is the concatenation of pairs of $n$-tuples to obtain $2n$-tuples and the enumeration of all the pairs of $n$-tuples. One element may be fixed so that we have ${2n-1}\choose{n-1}$ pairs to generate.
The matching algorithm on a line may provide a test for a general heuristic algorithm, for if a general matching heuristic works, it should work on a line. A simple heuristic may be designed if the vertices are points in a Euclidean space, edges are the line segments connecting the vertices, and weights are the distances between the end points of those line segments. If the number of vertices is $2^n\cdot 3,$ we find the nonbipartite $2$-matching that minimizes the sum of the lengths of line segments. There are $2^{n-1}\cdot 3$ line segments in this matching. We form a new graph by taking midpoints of the line segments in the matching, keeping track of which original vertices the line segments came from. We repeat this process until we get three vertices. Now we work our way back, forming a graph with six vertices and finding optimal triples by enumerating all the pairs of triples of vertices. We continue until we get a graph with $2^n\cdot 3$ vertices. When we use this algorithm on vertices on a line, we see it gives the correct result.
In part 2 of the paper theorem 2.3 may be used in the case that the weights assigned to edges of a tripartite graph satisfy the triangle inequality. Let $(a_i, b_j, c_k)$ be given, $a_i\in A,$ $b_j\in B,$ $c_k\in C,$ where $A,B,C$ are disjoint,
$|A|=|B|=|C|=n;$ then $w(c_k,a_i)\le w(a_i,b_j)+w(b_j,c_k).$ Let $(a_i, b_j)\in M_{A,B}^{min}$ and $(b_j, c_k)\in M_{B,C}^{min};$ then $(c_k, a_i)$ does not have to be in $M_{C,A}^{min}.$ Matching on a tripartite graph actually asks for 3-cycles $a_i,b_j,c_k,a_i.$
Actually, without knowing or caring what the weights of $(c_k,a_i)$ are, the use of the triangle inequality gives us an upper bound, $w(c_k,a_i)\le$ $w(a_i,b_j)+w(b_j,c_k).$ Thus, if we form a matching like this, denoted as $M_{A,B,C}^{\triangle},$ we have $$w(M_{A,B,C}^{\triangle})\le 2\big(\sum_{(a,b)\in M_{A,B}^{min}}w(a,b)+\sum_{(b,c)\in M_{B,C}^{min}}w(b,c) \big) =2\big(w(M_{A,B}^{min})+w(M_{B,C}^{min})\big).$$ Thus $$\frac{w(M_{A,B,C}^{\triangle})}{w(M_{A,B,C}^{min})}\le \frac{w(M_{A,B,C}^{\triangle})}{w(M_{A,B}^{min})+w(M_{B,C}^{min})+w(M_{C,A}^{min})}\le$$
$$\frac{2\big(w(M_{A,B}^{min})+w(M_{B,C}^{min})\big)} {w(M_{A,B}^{min})+w(M_{B,C}^{min})+w(M_{C,A}^{min})}\le \frac{2\big(w(M_{A,B}^{min})+w(M_{B,C}^{min})\big)} {w(M_{A,B}^{min})+w(M_{B,C}^{min})}=2.$$
\bf Appendix
\rm We may try to define a measure of variability in a way different from the usual approach. We assume there are $N$ real numbers $x_1, x_2,\dots , x_N.$ The usual measure of variability, the variance $S^2,$ is based on the sum of squares of differences from the mean, $$S^2=\frac{1}{N-1}\sum_{i=1}^N(x_i-\bar{x})^2.$$
The way we will define the measure of variability without any
reference to the mean is based on the sum of squares of
all the differences
$$\sum_{i=1}^{N-1}\sum_{j=i+1}^N(x_j-x_i)^2.$$ We may check what happens if $y_i=a+bx_i.$
$$\sum_{i=1}^{N-1}\sum_{j=i+1}^N(y_j-y_i)^2= \sum_{i=1}^{N-1}\sum_{j=i+1}^N(a+bx_j-a-bx_i)^2= b^2\sum_{i=1}^{N-1}\sum_{j=i+1}^N(x_j-x_i)^2.$$ It means the sum of all the differences squared has the same scaling property as the sum of squared differences from the mean, and it is therefore a reasonable characteristic of variability.
Before we show what relation there is between the sum of squares of all differences and the sum of squares of differences from the mean we write the sum of squares of all differences as $$2\sum_{i=1}^{N-1}\sum_{j=i+1}^N(x_j-x_i)^2= \sum_{i=1}^{N}\sum_{j=1}^N(x_j-x_i)^2.$$
This is easy to see when we write the difference $(x_j-x_i)$ in a different order as $(x_i-x_j).$ When the subscripts are the same, we get $x_i-x_i=0.$
Now we review the formula for $(a+b)^2.$ We usually say that $(a+b)^2=a^2+2ab+b^2$ because we use commutativity $ab=ba$ therefore $ab+ba=2ab.$ When we don't, we get $(a+b)^2=aa+ab+ba+bb.$ We will use this idea as
$$(\sum_{j=1}^Nx_j)^2=\sum_{i=1}^N\sum_{j=1}^Nx_ix_j.$$
\bf \noindent Theorem \rm Let $N>1$ and real numbers $x_1, x_2, \dots , x_N$ be given. Then $$\sum_{i=1}^N\sum_{j=1}^N(x_j-x_i)^2= 2N\sum_{i=1}^N(x_i-\bar{x})^2.$$
\bf \noindent Proof. \rm We expand the formula for twice the sum of squares of all differences
$$\sum_{i=1}^N\sum_{j=1}^N(x_j-x_i)^2= \sum_{i=1}^N\sum_{j=1}^N(x_j^2+x_i^2-2x_ix_j)=$$ $$\sum_{i=1}^N\sum_{j=1}^Nx_i^2+ \sum_{i=1}^N\sum_{j=1}^Nx_j^2- 2\sum_{i=1}^N\sum_{j=1}^Nx_ix_j= 2N\sum_{i=1}^Nx_i^2- 2\sum_{i=1}^N\sum_{j=1}^Nx_ix_j.$$
We expand $2N$ times the sum of squares of differences from the mean
$$2N\sum_{i=1}^N\Big(x_i-\frac{1}{N}\sum_{j=1}^Nx_j\Big)^2= 2N\sum_{i=1}^Nx_i^2-4\sum_{i=1}^Nx_i\sum_{j=1}^Nx_j +\frac{2}{N}\sum_{i=1}^N\Big(\sum_{j=1}^Nx_j\Big)^2=$$ $$2N\sum_{i=1}^Nx_i^2-4\sum_{i=1}^N\sum_{j=1}^Nx_ix_j +2\Big(\sum_{j=1}^Nx_j\Big)^2=$$ $$2N\sum_{i=1}^Nx_i^2-4\sum_{i=1}^N\sum_{j=1}^Nx_ix_j +2\sum_{i=1}^N\sum_{j=1}^Nx_ix_j=$$ $$2N\sum_{i=1}^Nx_i^2-2\sum_{i=1}^N\sum_{j=1}^Nx_ix_j.$$ This proves the desired equality.
\bf References
\rm \noindent Beck, C., Lu, B., Greevy, R. (2015), Nbpmatching: Functions for Optimal Non-Bipartite Matching. R package version 1.4.5.
\noindent https://cran.r-project.org/web/packages/nbpmatching
\noindent http://biostat.mc.vanderbilt.edu/wiki/Main/NonbipartiteMatching
\rm \noindent Bondy, J.A., Murty, U.S.R. (1976), Graph Theory with Applications, North-Holland, NY.
\rm \noindent Garey, M.R., Johnson, D.S. (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman and Co, NY.
\rm \noindent Lu, B., Greevy, R., Xu, X., Beck, C. (2011), Optimal Nonbipartite Matching and its Statistical Applications. The American Statistician, Vol. 65, no. 1, pp. 21-30.
\noindent Papadimitriou, C.H., Steiglitz, K. (1982), Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, NJ.
\end{document} | arXiv |
\begin{definition}[Definition:Literal/Positive]
A '''positive literal''' is an atom $p$ of propositional logic.
\end{definition} | ProofWiki |
Environmental factors and wood qualities of African blackwood, Dalbergia melanoxylon, in Tanzanian Miombo natural forest
Kazushi Nakai ORCID: orcid.org/0000-0001-8580-58761,5,
Moriyoshi Ishizuka2,
Seiichi Ohta2,
Jonas Timothy3,
Makala Jasper3,
Njabha M. Lyatura4,
Victor Shau4 &
Tsuyoshi Yoshimura5
Journal of Wood Science, volume 65, Article number: 39 (2019)
African blackwood (ABW) (Dalbergia melanoxylon) mainly occurs in the coastal areas of East Africa, including Tanzania and Mozambique, and its heartwood is commonly known as one of the most valuable materials used in the production of musical instruments. Although the heartwood is one of the most expensive timbers in the world, very low material yield has recently resulted in a significant reduction of natural individuals. This might have a serious impact on local communities, because this tree is apparently the only species that can support their livelihood. Therefore, a solution to the problem is urgently needed in terms of the sustainable development of communities. In this study, we surveyed environmental factors (stand structure and soil properties) in the Miombo woodlands of southern Tanzania, where ABW was once widely distributed, to clarify the factors affecting the growing conditions of ABW. Three community forests located in Kilwa District, Lindi, Tanzania, were selected as the survey sites, and 10–13 small plots (0.16 ha/plot) were randomly established at each site. In addition, the stem qualities of standing trees were evaluated by a visual inspection rating and a non-destructive measurement of stress-wave velocity, to understand the relationship between environmental factors and growth form. It was found that ABW was widely distributed under various environmental conditions with dense populations, and that its growth form depended on environmental factors. Since there was no significant difference in stress-wave velocities among the sites, our findings suggest that the dynamic properties of ABW trees do not depend on growth conditions, which are generally influenced by various external factors. These results provide important information regarding the sustainable forest management of ABW.
African blackwood (ABW, Dalbergia melanoxylon), commonly known as Mpingo in Swahili (trade name, grenadilla), is generally used in the manufacture of clarinets, oboes, bagpipes and other musical instruments. It has been traded to European countries for this purpose since the early nineteenth century [1]. ABW is valued as an appropriate material for musical instruments not only because of its exterior appearance, but also due to the exceptional properties of the material. For example, the air-dried density of the heartwood ranges from 1.1 to 1.3 g/cm3 [2, 3], while the loss factor (tanδ) is lower than that of other general hardwood species [4]. Since ABW is the only species that can meet the requirements for musical instrument production, the conservation of this timber resource is vitally important for a sustainable music industry.
African blackwood is now widely distributed throughout tropical Africa, found in at least 26 sub-Saharan countries including Tanzania, Kenya, Ethiopia and Nigeria [5]. It can grow under a wide range of conditions from semi-arid, to sub-humid, to tropical lowland areas [6, 7], and occurs in deciduous woodland, coastal bushland and wooded grassland, where the soils are sufficiently moist [5]. ABW is frequently observed in Miombo woodland, which covers approximately 10% of the African continent [8]. Miombo woodland is a semi-deciduous formation characterized by dominant trees in the genera Brachystegia, Julbernardia and Isoberlinia [9,10,11]. It supports the livelihood of 100 million people around the area who rely on products from this distinct and unique biome [12]. In addition, ABW is an economically important tree in many African woodlands, supporting local communities.
Currently, the local NGO Mpingo Conservation & Development Initiative (MCDI) is working toward sustainable forest conservation based on Forest Stewardship Council (FSC)-certified forest in the southern part of Tanzania, in Kilwa District, Lindi. MCDI focuses on a Participatory Forest Management (PFM) system, which acts as a basic legal facilitator for Reducing Emissions from Deforestation and forest Degradation, plus the sustainable management of forests and the conservation and enhancement of forest carbon stocks (REDD+). It gives local communities control and ownership of their local forest resources, including timber, through demarcated village land forest reserves (VLFRs), which would otherwise be controlled by the government [13, 14]. Its contribution to controlling illegal logging can also improve local community forestry. ABW has become one of MCDI's most important species, not only in terms of historical utilization [15,16,17], but also for income generation.
As mentioned above, ABW is mainly used in the musical instrument industry, although it is also used for decorative objects such as traditional carvings [15, 16, 18, 19]. The general characteristics of ABW trees have been reported: average height, 5–7 m; multi-stemmed with a bole circumference normally < 120 cm; and irregularly shaped crown [5, 20]. Small trees tend to cause serious problems in the operation of sawmills due to lateral twists, deep fluting, and knots including cracks [21]. Such defects may affect the general performance of musical instruments. For example, the internal surface condition of the wood can impact acoustic attenuation in the cylindrical resonators of woodwind instruments [22]. As a result, sawmills can generate only a small amount of timber of the necessary quality, with an actual timber yield of 9% [23]. Meanwhile, intensive harvesting has induced a social concern about the sustainability of ABW resources. This inefficient utilization has made ABW one of the most highly priced timbers in the world, with a market rate of US$14,000–20,000 per m3 [1, 24], and has threatened the species' future existence [24, 25]. In fact, ABW has been designated as "near threatened" on the IUCN (International Union for Conservation of Nature) red list since 1998 [26], and since 2017, the trade of all existing Dalbergia species including ABW has been restricted worldwide by the CITES (Convention on International Trade in Endangered Species of Wild Fauna and Flora) treaty [27].
The main purpose of this study is to assess the potential of the ABW tree in terms of sustainable forest utilization. The relationship between its distribution and environmental factors (surrounding vegetation and soil) must be clarified before sustainability of ABW in natural forest can be achieved. Although some difficulties have been noted in terms of the economic feasibility of ABW [23], forest management focused on this resource could continuously contribute to the local community forest because of the economic uniqueness of the wood. Therefore, valuable ABW that meets the requirements of musical instruments should be produced effectively by controlling growth conditions appropriately.
In general, the surrounding environment, including climatic factors, soil type, and surrounding vegetation, has the potential to influence tree growth. Such environmental conditions have already been studied in some locations [28,29,30,31,32,33]; however, the relationship between environmental conditions and wood quality has not yet been clarified. In this study, environmental conditions in the natural distribution areas of ABW were compared to determine the relationship between tree growth and wood quality. Our results can contribute to establishing sustainable forest management by local communities.
A forest survey was conducted in the southern part of Tanzania, in Kilwa District, Lindi, which covers 13,347.5 km2 and is one of Tanzania's most densely forested districts [34]. More than 150,000 ha of this area has been designated FSC-certified forest supported by MCDI, consisting principally of community forests managed by local groups. For this study, three FSC-certified community forests (Kikole, Nainokwe, and Nanjirinji) were selected (Fig. 1). In each forest, 9–11 small temporary plots (0.16 ha: 40 m × 40 m) were randomly set using GPS (eTrex, Garmin International Inc., Kansas, USA) and a laser range finder (TruPulse360, Laser Technology, Inc., Colorado, USA). A total of 31 plots were set as study sites: 11 each in Kikole and Nainokwe, and 9 in Nanjirinji. Two plots without ABW trees were included at each site as references. The survey was conducted in July and December 2017.
(District Boundary was adapted from The World Bank: https://energydata.info/dataset/tanzania-region-district-boundary-2012, Village Boundary and FSC Certified Forest area were adapted from MCDI: http://www.mpingoconservation.org/where-we-work/on-the-map/)
Location map of the survey sites
Vegetation survey
All living trees over 10 cm DBH (diameter at breast height: 1.3 m from the ground) were measured for DBH using a diameter tape. For multi-stemmed trees forking below 1.3 m, each stem was measured separately, and the stem with the biggest DBH was regarded as the individual's DBH. The number of individuals was also counted in this way. Trees were tagged and classified by local species name, and each scientific name was then identified with reference to previous survey reports [23, 29, 35] (Table 1). Furthermore, both tree height and branch height of ABW trees over 10 cm DBH were measured to evaluate the growth form of ABW. The basal area of each tree, G, was calculated by the following equation (Eq. 1):
$$G = \sum_{k = 1}^{n} \pi \left( \frac{D_{k}}{2} \right)^{2}$$
where $D_k$ is the DBH of stem $k$, and $n$ is the number of stems of the tree.
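As a minimal illustration of Eq. 1, a sketch in Python is given below; the function name and the cm-based input are our assumptions, not part of the survey protocol.

```python
import math

def basal_area_m2(stem_dbh_cm):
    """Basal area G (m^2) of one tree from the DBH of its stems (Eq. 1).

    DBH values are in cm; each stem contributes pi*(D/2)^2,
    with D/200 converting a cm diameter to a radius in m.
    """
    return sum(math.pi * (d / 200.0) ** 2 for d in stem_dbh_cm)

# Example: a two-stemmed tree with DBH of 18 cm and 12 cm
g = basal_area_m2([18.0, 12.0])
print(round(g, 4))  # ~0.0368 m^2
# Per-hectare values follow by summing over a plot and dividing
# by the plot area (0.16 ha in this survey).
```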
Table 1 Local and scientific names of trees in the survey sites [24, 30, 36]
Soil sampling and evaluation
Soil samples were collected from the center of each plot, which was taken as representative of the plot. At each sampling point, four soil cores were collected from 0–10, 45–55, 95–105 and 145–155 cm depth using a soil auger. Soil condition was evaluated in the field by Munsell soil color, finger soil texture, and soil pH (H2O) measured with a glass-electrode pH meter (pH meter D-51, HORIBA, Kyoto, Japan) in a 1 (soil):2.5 (distilled water) suspension. Soil color was evaluated under sunlight according to the standard Munsell soil color chart, and soil texture was determined by finger test on moist soil samples with reference to the widely used USDA system [36].
Evaluation of surface appearance
The surface appearance of all living ABW trees over 10 cm DBH in each plot was evaluated according to the following criteria, with reference to a previous report [37]. The lower part of the stem, from 0.3 up to 1.3 m, was virtually divided into quarters (Fig. 2), and each part was classified into one of four grades (0, 1, 2, 3) based on the ratio of clear area with no visible defects, including cracks, holes, piths, etc. (Table 2). The grade of each living tree was obtained as the average of the four quarters.
Surface appearance evaluation. The numbers, 1, 2, 3 and 4, indicate the 4 evaluated surfaces on each stem. The average grade of each stem was calculated by all results from every surface
Table 2 Grade list for wood evaluation
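The grading rule above can be summarized in a short sketch (Python; the helper names are hypothetical, and the class limits follow the bins given later in "Quality analysis of living trees"):

```python
def stem_grade(quarter_grades):
    """Average surface grade of one stem from its four quarter grades (0-3)."""
    assert len(quarter_grades) == 4
    return sum(quarter_grades) / 4.0

def grade_class(avg_grade):
    """Bin an average grade into the classes used for Fig. 7."""
    if avg_grade < 1.0:
        return "Low"      # 0.00-0.99
    if avg_grade < 2.0:
        return "Middle"   # 1.00-1.99
    return "High"         # 2.00-3.00

print(grade_class(stem_grade([2, 1, 2, 2])))  # "Middle" (average 1.75)
```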
Measurement of stress-wave velocity
The dynamic physical properties of living ABW trees were evaluated by measuring the stress propagation time in trees with a microsecond timer, FAKOPP (FAKOPP Enterprise, Agfalva, Hungary). Stress propagation time is generally related to the dynamic physical properties of materials; in particular, the propagation time in the L-direction of timber can be converted to the dynamic Young's modulus using the material density. The start and stop sensors were set on the tree surface at a fixed distance (1 m) apart, between heights of 0.3 and 1.3 m, along the L-direction of the tree. A stress wave was input by a single tap of a dedicated hammer (Fig. 3). Sensors were driven into the bark (2 cm deep) at a 60° angle to the surface (Fig. 3). Although the angle for this test is normally 45° [38], a larger angle was needed in this study due to the significant hardness of ABW.
Experimental set-up for measuring stress propagation time using FAKOPP
Stress-wave velocity Vs (m/s) was approximately calculated by the following equation (Eq. 2):
$$V_{\text{s}} = \frac{L}{T}$$
where L is defined as the distance (1 m) between sensors, and T indicates the average stress propagation time of each tree [12 replications per tree: 3 times per quarter (Fig. 2)].
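Equation 2 amounts to averaging the 12 propagation-time readings per tree and dividing the sensor distance by that mean; a sketch (Python) is given below. The assumption that the timer reports times in microseconds is ours:

```python
def stress_wave_velocity(times_us, distance_m=1.0):
    """Vs (m/s) from Eq. 2: sensor distance over mean propagation time.

    times_us: the 12 readings per tree (3 per quarter), in microseconds.
    """
    mean_time_s = (sum(times_us) / len(times_us)) * 1e-6
    return distance_m / mean_time_s

print(stress_wave_velocity([350.0] * 12))  # ~2857 m/s
```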
Data treatment and statistical analysis
Classification and ordination of the tree vegetation data were performed based on the total G of each species and the tree population of each plot. Tree population was calculated from the number of individuals, with each multi-stemmed tree counted once and represented by its largest-DBH stem. Data were statistically compared by the Kruskal–Wallis test to analyze the relative effect of each factor, and the Steel–Dwass test was used as a supplementary post hoc test at the 1% significance level. The reference plots were analyzed by the same method.
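The site comparisons could be reproduced along the following lines (Python; the DBH values are illustrative placeholders, not survey data). SciPy provides the Kruskal–Wallis test directly; a Steel–Dwass-type post hoc test is not in SciPy, but the scikit-posthocs package offers a Dwass–Steel–Critchlow–Fligner test that could serve the same role:

```python
from scipy import stats

# DBH samples (cm) per site -- illustrative values only
kikole     = [24.1, 31.5, 18.2, 40.3, 27.7]
nainokwe   = [14.8, 17.2, 12.9, 21.0, 15.5]
nanjirinji = [22.4, 35.1, 19.8, 28.6, 44.2]

h, p = stats.kruskal(kikole, nainokwe, nanjirinji)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```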
Tree species composition
Figure 4 shows the total G of all measured trees at each site, calculated by Eq. 1. The total G values of the 3 sites were 15.07 m2/ha in Kikole, 9.64 m2/ha in Nainokwe, and 12.66 m2/ha in Nanjirinji; Nainokwe had the lowest total G value of the three sites. The same trend was found in the reference plots (Kikole: 4.10 m2/ha, Nainokwe: 2.66 m2/ha, Nanjirinji: 3.72 m2/ha). The average basal area of stands in Nainokwe was also smaller than at the other sites, although the differences were not statistically significant at the 1% level (Kikole–Nainokwe: p = 0.0260, Nainokwe–Nanjirinji: p = 0.6045, Kikole–Nanjirinji: p = 0.1271). Tree species diversity was lowest in Nainokwe, where only three dominant species (Mpingo (ABW), Miombo and Msolo) occupied more than 68% of the total basal area. The G values of ABW at the 3 sites were 5.27 m2/ha in Kikole, 4.19 m2/ha in Nainokwe, and 5.20 m2/ha in Nanjirinji, equal to ca. 35% of the total G value in Kikole, ca. 44% in Nainokwe, and ca. 41% in Nanjirinji (Fig. 4).
Total G per hectare by species for three survey sites
As shown in Table 3, the population density (number of individual trees/ha) of ABW was highest in Nainokwe (57.39 trees/ha), followed by Kikole (40.01 trees/ha) and Nanjirinji (31.94 trees/ha). In addition, the tree density of all species including ABW was also highest in Nainokwe (Table 3). Table 3 also shows that the growth form (DBH and tree height) of ABW in Nainokwe was significantly smaller than at the other sites, whereas the DBH of all species in Nainokwe was not statistically different from that in Nanjirinji (p = 0.6201).
Table 3 Comparison of specified parameters among 3 sites
Distributions of DBH and tree height of ABW trees are shown in Figs. 5 and 6, respectively. Nainokwe had an especially high number of small ABW trees (here we defined "small trees" as trees less than 20 cm in DBH and 7 m in height) (Fig. 5). The DBH distribution differed considerably between Kikole and Nanjirinji: the number of mid-sized trees (20–40 cm DBH) in Kikole was relatively larger than in Nanjirinji, although tree height showed the same trend in both forests (Figs. 5, 6). Furthermore, in Kikole and Nainokwe there was a clear tendency of fewer trees with increasing DBH, whereas Nanjirinji had a comparatively lower number of mid-sized trees (Fig. 5). Some big trees (DBH > 50 cm) were observed at all the sites, but fewer in Nainokwe (Fig. 5). Branch height was lowest in Nainokwe, although the difference was not statistically significant at the 1% level (p = 0.0474) (Table 3).
Distribution of DBH at each site (ABW only)
Distribution of tree height at each site (ABW only)
Tables 4 and 5 show soil data for the 3 sites; several soil types were observed depending on the sampling location (depth and plot). The soil of the Kikole site was the sandiest of the three sites, ranging from clay loam (CL) to sandy loam (SL) (Tables 4, 5). On the other hand, most of the soil samples in Nainokwe and Nanjirinji were evaluated as clay (C), with white crystal-like calcium carbonate (Tables 4, 5). There were no significant differences in soil pH (H2O) between Nainokwe and Nanjirinji, but the pH in Kikole was significantly lower than at the other sites (Table 4). Soils of yellowish to reddish colors (7.5YR–10.0YR in Munsell Color) were recorded for some plots in both Kikole and Nainokwe, whereas mostly dark-colored soil (blackish soil, less than 4.0 in color value) was observed in Nanjirinji. The same trend was also found in the reference plots (Tables 4, 5).
Table 4 Soil conditions in the three survey sites: soil texture, Munsell Color YR (mean ± SD), color value (mean ± SD) and pH (H2O) (mean ± SD)
Table 5 Soil texture data in all the sampling locations (S sand, LS loamy sand, SL sandy loam, SCL sandy clay loam, SC sandy clay, L loam, LC loamy clay, CL clay loam, C clay) including control plots (plot no. with a small letter, *)
Quality analysis of living trees
Evaluation values of the appearance of ABW trees were converted into an average grade: low: 0.00–0.99, middle: 1.00–1.99, or high: 2.00–3.00. Figure 7 shows the individual occurrence ratio of each grade in the Kikole, Nainokwe, and Nanjirinji sites. In Kikole and Nanjirinji, the majority of trees received a "Middle" grade, while Nanjirinji had a larger number of "High" appearance trees, over 30% (Fig. 7). On the other hand, most trees in Nainokwe were evaluated as "Low", and it had a much lower rate of "Middle" and "High" grade trees (Fig. 7).
Occurrence ratio of individuals evaluated at each site
As shown in Table 6, the average stress-wave velocity (Vs) in Nanjirinji (2990 m/s) was higher than at the other sites (Kikole: 2808 m/s, Nainokwe: 2676 m/s). Vs in Nainokwe was the lowest of all sites, and its difference from the Nanjirinji site was significant at the 1% level (p < 0.001, Table 6), whereas the differences between Kikole and Nainokwe (p = 0.276) and between Kikole and Nanjirinji (p = 0.241) were not significant.
Table 6 Average stress-wave velocity (Vs) of ABW trees in the survey sites
When the Vs data of all ABW trees were plotted against DBH (Fig. 8) and against the appearance evaluation value (Fig. 9), interestingly, no clear tendency was found, and only poor correlations were obtained (Vs–DBH: r = 0.0637; Vs–appearance grade: r = 0.2356). Furthermore, there was no relationship between these parameters at any individual site (Figs. 8, 9), even though the DBH, height and appearance of trees in Nainokwe were all inferior to those of the other 2 sites (Table 3). In addition, Vs was compared against appearance grade for trees of middle grade and above (1.00–3.00) only. The correlation between Vs and appearance grade remained poor (r = 0.2512), and there was no significant difference at the 1% level among the survey sites (Kikole–Nainokwe: p = 0.1666; Nainokwe–Nanjirinji: p = 0.9852; Kikole–Nanjirinji: p = 0.0762).
Relationship between DBH and stress-wave velocity (Vs) of ABW. The correlation coefficients for each site were as follows: Kikole: r = − 0.1749; Nainokwe: r = − 0.0999; Nanjirinji: r = 0.1134
Relationship between the evaluated appearance grade and stress-wave velocity (Vs) of ABW. The correlation coefficients for each site were as follows: Kikole: r = 0.2622; Nainokwe: r = 0.0523; Nanjirinji: r = − 0.1128
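The correlation coefficients reported in Figs. 8 and 9 can be computed as ordinary Pearson correlations; a sketch (Python, with placeholder values rather than survey data) follows:

```python
import numpy as np

# Paired per-tree observations -- illustrative values only
dbh = np.array([18.0, 25.0, 32.0, 14.0, 41.0])
vs  = np.array([2650.0, 2810.0, 2790.0, 2700.0, 2880.0])

r = np.corrcoef(dbh, vs)[0, 1]
print(f"r = {r:.4f}")
```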
In this study, we found that ABW can survive under various environmental conditions with high relative dominance. Different vegetation types were observed depending on the sample location (Fig. 4), and the vegetation surrounding ABW trees significantly influenced their growth. The Nainokwe site differed significantly from the 2 other sites in terms of tree species composition and growth form (Fig. 4, Table 3). The Nainokwe site is mainly covered by wooded grassland, while open woodland covers larger areas of Kikole [33]. Although there has not yet been an official report, the Nanjirinji site could also be categorized as mostly open woodland because of its statistical similarity to the parameters of the Kikole site (Table 3, Figs. 4, 5 and 6).
Generally, there are many low trees with lower branch height in wooded grassland compared to open woodland [33] (Table 3, Fig. 6). In particular, some ABW trees in Nainokwe showed relatively small DBH together with low height compared to those of the other sites (Table 3, Fig. 5). This forest had many juvenile ABW trees with small DBH and low height (Figs. 5, 6). Considering the diagnostic parameters listed in Tables 3 and 6, it seems that the environmental conditions at each site continuously influenced tree growth.
On the other hand, the DBH of all trees in Kikole forest was significantly bigger than at the other 2 sites, with an intensive number of mid-sized ABW trees, quite different from the Nanjirinji forest (Table 3, Fig. 5). This suggests that there might be a relationship between forest density and ABW regeneration. ABW is known as a light-demanding species; thus, it might not regenerate under heavily closed vegetation [6, 39, 40]. Where forest density is lower, ABW trees can also become multi-stemmed with smaller DBH and lower height. This is generally known as a typical physiological response: trees in dense forests must compete for light, which places a premium on height growth, meaning that trees grow tall [32]. It is suggested that the significant difference in DBH distribution between Nainokwe and the other sites reflects the natural ABW habitat. Kikole forest apparently has appropriate conditions under which ABW trees can coexist with other species, in terms of both tree density and the number of individuals of each species (Table 3).
Furthermore, forest conditions including vegetation type generally depend on environmental factors such as topography, climate, and human activities. Tree growth can also be affected by environmental factors such as topography, resource availability, and previous disturbance [31, 32]. The abundance, distribution, and diversity of vegetation tend to be strongly influenced by the qualities of the physical landscape, with plant species composition arising from both the physical and chemical characteristics of the land [29]. Luoga et al. [41] reported that harvesting activity significantly affects the vegetation structure of woodlands, and the specific distribution of aged trees might be the result of clear-cutting of such trees [42]. Banda et al. [30] also reported that the gradient of land protection influences forest ecosystems in terms of growth form, regeneration, and species richness. However, some potential factors, including human activities such as fire and harvesting, were not studied here. Further investigation of vegetation transition driven by human activities is needed to clarify the specific distribution of ABW trees in natural forest.
Ilunga Muledi et al. [35] reported a variety of soil factors in a Miombo forest and found that vegetation was related to those factors. In this study, we found a variety of soil types at the 3 sites: from sandy to clay, with or without CaCO3 and/or Fe nodules (Tables 4, 5). However, the results clearly suggest that ABW can grow in a wide variety of soil types regardless of their properties. In addition, dark-colored soils with CL to C textures were observed in some plots at the Nanjirinji site (Table 5); these might have better physical (drainage and water retention) and chemical (more nutrients) properties.
In general, soil color depends on the major inorganic components and the amount of organic matter, which determines the physical properties of the top soil. High clay content results in a high capacity for storing organic matter, so that soil color darkens. Heavier clayey alkaline soil with high CaCO3 content seems to affect root extension into deeper soil layers. In contrast, sandy soil (S), which was observed in Kikole, might be disadvantageous for plant growth due to poor nutrients and low water-holding capacity. The soils of Nainokwe were similar to those of Nanjirinji, although their vegetation obviously differed. We conclude that ABW trees can grow on a variety of soil types, even where other plants cannot grow well. It has been suggested that the rooting of ABW trees is not greatly affected by soil condition due to their coexistence with nitrogen-fixing mycorrhizal fungi, and ABW is commonly known to radiate out 30–50 m by root suckers [39, 43]. The survival of ABW is apparently the result of adaptation to a wide variety of soil conditions despite its less-competitive behavior in high-diversity dense forest.
Recent studies of the relationship between tree growth and Vs have reported that velocity depends on planting density, which also influences tree-form properties such as bending, multiple stems, cracks, and decay [37]. A positive relationship was observed between MOE and Vs in the living coniferous tree Hinoki (Chamaecyparis obtusa Endl.) [38, 44, 45], and a positive relationship between wood hardness and Vs has been observed using a stress-wave timer in some tropical hardwoods (Nectandra cuspidata, Mezilaurus itauba and Ocotea guianensis) [46]. In addition, Vs, wood density and ultrasonic velocity (another non-destructive measurement) have also been positively related to the MOE of some planted hardwood trees (Melia azedarach, Shorea spp. and Maesopsis eminii) [47, 48]. Although the wood density of the measured trees was not evaluated in this study, differences in wood density might result in different Vs, as shown in these studies of other species. Wood density should therefore be evaluated to further discuss tree growth and wood quality. Vs is affected by defects such as cracks, pith and holes, because the stress wave principally takes the shortest internal propagation route; propagation time would therefore be delayed by any serious defect between the sensors. However, the physical quality of ABW was not significantly related to its appearance in this study, because there were only poor correlations between Vs and the appearance grades (Fig. 9); furthermore, Vs remained poorly correlated with appearance grade even when the analysis was restricted to trees of middle grade and above.
African blackwood trees at the Nainokwe site clearly had a worse appearance than those at the other 2 sites, together with the lower growth parameters found in this study (Fig. 7, Table 3). This might be due to a relationship between environmental conditions and tree growth, although growth rates have not yet been fully evaluated. Trees on fertile, well-drained soils such as loam can grow rapidly, resulting in high-density forest [33] but also promoting fluting [31]. Fluting severity has been positively correlated with tree growth and branch height in Western Hemlock trees (Tsuga heterophylla) [49]. Furthermore, disturbances such as clear-cutting and mechanical stress can also induce more fluting [31]. Karlinasari et al. [48] also showed negative correlations between wood quality traits (wood density, dynamic MOE and ultrasonic velocity) and tree volume in plantings of same-aged trees. Since the stress-wave velocities (Vs) were not significantly different among survey sites covering a variety of soil and landscape conditions (Table 6, Figs. 8 and 9), our findings suggest that the dynamic physical properties of ABW trees are not related to growth conditions in natural forest, which are generally influenced by various external factors.
In this study, both the environmental conditions and the physical properties of living ABW trees were investigated to identify appropriate growth conditions and to assess the quality requirements for musical instruments. ABW can survive under various environmental conditions with intensive populations. However, trees living under the inferior conditions of wooded grassland (Nainokwe) tended to have smaller DBH, lower height, and worse appearance. By contrast, trees in open woodland, in Kikole and Nanjirinji, showed better tree form and appearance. In particular, trees at the Nanjirinji site, where soils with better properties were mostly observed, tended to have larger DBH, greater height, and better appearance. This suggests that soil condition could influence ABW growth. Differences in ABW growth form might be related to its light-demanding nature and to competition with other plant species. There was no significant difference in the stress-wave velocities of living ABW trees among the 3 sites, even though we observed significant environmental effects on tree appearance. We therefore concluded that external factors had no significant effect on the physical properties of the trees as timber materials. Forest management should focus on producing high-yield trees with bigger DBH and higher branch height to achieve sustainability of ABW resources as an industrial material. Moreover, methods to accelerate growth while maintaining the original specifications (i.e., dark-colored heartwood, high density) are needed in natural forest. We believe that a sustainable and healthy forest should be based on sustainable wood utilization.
As mentioned earlier, ABW is an endangered species, and thus plantations with proper management must be established in the near future, together with novel approaches for the effective utilization of currently unused parts of the trees. The results obtained in this study may contribute significantly to the sustainable production and utilization of this precious timber resource.
In the original publication of the article [1], the scientific name of Mlondondo was misspelled as "Xeoderis stuhlmannii" instead of "Xeroderris stuhlmannii" in Table 1. The corrected table 1 is given in this correction article.
ABW:

African blackwood
NGO:
Non-Government Organization
MCDI:
Mpingo Conservation & Development Initiative
FSC:

Forest Stewardship Council
PFM:
Participatory Forestry Management System
REDD+:
Reducing Emissions from Deforestation and forest Degradation, plus the sustainable management of forests, and the conservation and enhancement of forest carbon stocks
VLFRs:
village land forest reserves
IUCN:

International Union for Conservation of Nature
DBH:
diameter at breast height
G :
basal area of each tree
D k :
the DBH of each tree
k :
the stem number of each tree
V s :
stress-wave velocity
LS:
loamy sand
SCL:
sandy clay loam
SC:
sandy clay
LC:
loamy clay
CL:
clay loam
MOE:
modulus of elasticity
Cunningham AB, Manalil S, Flower K (2015) More than a music tree: 4400 years of Dalbergia melanoxylon trade in Africa. S Afr J Bot 98:167. https://doi.org/10.1016/j.sajb.2015.03.004
Malimbwi RE, Luoga EJ (2000) Prevalence and standing volume of Dalbergia melanoxylon in coastal and inland sites of southern Tanzania. J Trop For Sci 12(2):336–347
Sproßmann R, Zauer M, Wagenführ A (2017) Characterization of acoustic and mechanical properties of common tropical woods used in classical guitars. Res Phys 7:1737–1742. https://doi.org/10.1016/j.rinp.2017.05.006
Brémaud I, El Kaïm Y, Guibal D, Minato K, Thibaut B, Gril J (2012) Characterisation and categorisation of the diversity in viscoelastic vibrational properties between 98 wood types. Ann For Sci 69(3):373–386. https://doi.org/10.1007/s13595-011-0166-z
Sacandé M, Vautier H, Sanon M, Schmit L (eds) (2007) Dalbergia melanoxylon Guill. & Perr. Seed Leaflet 135
Orwa C, Mutua A, Kindt R, Jamnadass R, Simons A (1994) Agroforestry database: a tree species reference and selection guide version 4.0. World Agroforestry Centre ICRAF, Nairobi, KE. http://www.worldagroforestry.org/sites/treebs/treedatabases.asp. Accessed 25 Jan 2017
Nshubemuki L, Mugasha AG (1995) Chance discoveries and germ-plasm conservation in tanzania: some observations on 'reserved' trees. Environ Conserv 22(1):51–55. https://doi.org/10.1017/S037689290003407X
Millington AC, Chritchley RW, Douglas TD, Ryan P (1994) Prioritization of indigenous fruit tree species based on formers evaluation criteria: some preliminary results from central region, Malawi. In: Proceedings of the regional conference on indigenous fruit trees of the Miombo ecozone of Southern Africa, Mangochi, Malawi, 23–27 January 1994
White F (1983) The Zambezian regional centre of endemism. In: White F (ed) The vegetation of Africa: a descriptive memoir to accompany the UNESCO/AETFAT/UNSO vegetation map of Africa (Natural Resources Research 20). UNESCO, Paris, pp 86–101
Campbell B, Frost P, Byron N (1996) Miombo woodlands and their use: overview and key issues. In: Campbell B (ed) The Miombo in transition: woodlands and welfare in Africa. Center for International Forestry Research (CIFOR), Bogor, pp 1–10
Desanker PV, Frost PGH, Justice CO, Scholes RJ (eds) (1997) The miombo network: framework for a terrestrial transect study of land-use and land-cover change in the Miombo ecosystems of central Africa. IGBP Report 41, The International Geosphere-Biosphere Programme (IGBP), Stockholm
Campbell BM, Angelsen A, Cunningham A, Katerere Y, Sitoe A, Wunder S (2007) Miombo woodlands: opportunities and barriers to sustainable forest management. Centre for International Forestry Research (CIFOR), Bogor
URT (United Republic of Tanzania) (2007) Prime Minister's Office, Information about Lindi region, Kilwa District. http://lindi.go.tz/limdi/limdi-rural/. Accessed 1 May 2018
Khatun K, Corbera E, Ball S (2017) Fire is REDD+: offsetting carbon through early burning activities in south-eastern Tanzania. Oryx 51(1):43–52. https://doi.org/10.1017/S0030605316000090
Bryce JM (1967) The commercial timbers of Tanzania. Forest Division, Ministry of Agriculture & Co-operatives, Moshi
Mbuya LP, Msanga HP, Ruffo CK, Birnie A, Tengnas BO (1994) Useful trees and shrubs for Tanzania: identification, propagation and management for agricultural and pastoral communities. Regional Soil Conservation Unit, Swedish International Development Authority, Nairobi
Ball SMJ (2004) Stocks and exploitation of East African blackwood Dalbergia melanoxylon: a flagship species for Tanzania's Miombo woodlands? Oryx 38(3):266–272. https://doi.org/10.1017/S0030605304000493
Burkill HM (1995) Useful plants of west tropical Africa, vol 3. Royal Botanic Gardens Kew, London
Christian MY, Chirwa PW, Ham C (2008) The influence of tourism on the woodcarving trade around Cape Town and implications for forest resources in southern Africa. Dev South Afr 25(5):577–588. https://doi.org/10.1080/03768350802447800
Lemmens RHMJ (2008) Dalbergia melanoxylon Guill. & Perr. In: Louppe D, Oteng-Amoako AA, Brink M (eds) Plant resources of tropical Africa. Available via DIALOG. https://uses.plantnet-project.org/en/Dalbergia_melanoxylon_(PROTA). Accessed 25 Nov 2017
Lovett J (1987) Mpingo—the African blackwood. Swara 10:27–28
Boutin H, Le Conte S, Vaiedelich S, Fabre B, Le Carrou JL (2017) Acoustic dissipation in wooden pipes of different species used in wind instrument making: an experimental study. J Acoust Soc Am 141(4):2840–2848. https://doi.org/10.1121/1.4981119
Gregory A, Ball SMJ, Eziefula UE (1999) Tanzanian Mpingo 98 Full Report. Mpingo Conservation Project, Tanzania
Jenkins M, Oldfield S, Aylett T (2002) International trade in African blackwood. Fauna & Flora International, Cambridge
Hamisy WC, Hantula J (2002) Characterization of genetic variation in African Blackwood, Dalbergia melanoxylon using random amplified microsatellite (RAMS) method. Plant genetic resources and biotechnology in Tanzania, Part 1: biotechnology and social aspects. In: Proceedings of the second national workshop on plant genetic resources and biotechnology, Arusha, Tanzania, 6–10 May 2002
World Conservation Monitoring Centre (1998) Dalbergia melanoxylon. The IUCN red list of threatened species. 1998. http://dx.doi.org/10.2305/IUCN.UK.1998.RLTS.T32504A9710439.en. Accessed 10 June 2018
UNEP-WCMC (2017) Review of selected Dalbergia species and Guibourtia demeusei. UNEP-WCMC, Cambridge
Munishi PKT, Shear TH, Wentworth T, Temu RAPC (2007) Compositional gradients of plant communities in submontane rainforests of eastern Tanzania. J Trop For Sci 19:35–45
Munishi PKT, Temu RAPC, Soka G (2011) Plant communities and tree species associations in a Miombo ecosystem in the Lake Rukwa basin, Southern Tanzania: implications for conservation. J Ecol Nat Environ 3(2):63–71
Banda T, Schwartz MW, Caro T (2006) Woody vegetation structure and composition along a protection gradient in a Miombo ecosystem of western Tanzania. For Ecol Manag 230(1–3):179–185. https://doi.org/10.1016/j.foreco.2006.04.032
Julin KR, Shaw CG, Farr WA, Hinckley TM (1993) The fluted western hemlock of Alaska II: stand observations and synthesis. For Ecol Manag 60(1–2):133–141. https://doi.org/10.1016/0378-1127(93)90027-K
Koch GW, Sillett SC, Jennings GM, Davis SD (2004) The limits to tree height. Nature 428(6985):851
Mariki AS, Wills AR (2014) Environmental factors affecting timber quality of African Blackwood (Dalbergia melanoxylon). Mpingo Conservation & Development Initiative, Kilwa Masoko
Miya M, Ball SMJ, Nelson FD (2012) Drivers of deforestation and forest degradation in Kilwa District. Mpingo Conservation & Development Initiative, Kilwa Masoko, pp 1–34
Ilunga Muledi J, Bauman D, Drouet T, Vleminckx J, Jacobs A, Lejoly J, Meerts P, Shutcha MN (2016) Fine-scale habitats influence tree species assemblage in a Miombo forest. J Plant Ecol 10(6):958–969. https://doi.org/10.1093/jpe/rtw104
Rowell DL (2014) Soils in the field. In: Rowell DL (ed) Soil science: methods & applications. Longman Group, London, pp 1–16. https://doi.org/10.4324/9781315844855
Fukuchi S, Yoshida S, Mizoue N, Murakami T, Kajisa T, Ohta T, Nagashima K (2011) Analysis of the planting density toward low-cost forestry: a result from the experimental plots of Obi-sugi planting density. J Jpn For Soc 93(6):303–308 (in Japanese)
Fujisawa Y, Kashiwagi M, Inoue Y, Kuramoto N, Hiraoka Y (2005) An application of FAKOPP to measure the modulus of stem elasticity of hinoki (Chamaecyparis obtusa Endl.). Kyushu J For Res 58:142–143 (in Japanese)
Washa BW (2008) Dependence of Dalbergia melanoxylon natural populations on root suckers germination. Asian J Afr Stud 24(32):177–198
Ball SMJ, Smith AS, Keylock NS, Manoko L, Mlay D, Morgan ER, Ormand JRH, Timothy J (1998) Tanzanian Mpingo '96 Final Report. Mpingo Conservation Project, Fauna & Flora International, Cambridge
Luoga EJ, Witkowski ETF, Balkwill K (2004) Regeneration by coppicing (resprouting) of Miombo (African savanna) trees in relation to land use. For Ecol Manag 189(1–3):23–35. https://doi.org/10.1016/j.foreco.2003.02.001
Jew EK, Dougill AJ, Sallu SM, O'Connell J, Benton TG (2016) Miombo woodland under threat: consequences for tree diversity and carbon storage. For Ecol Manag 361:144–153. https://doi.org/10.1016/j.foreco.2015.11.011
Washa BW, Nyomora AMS, Lyaruu HMV (2012) Improving propagation success of D. melanoxylon (African blackwood) in Tanzania (II): rooting ability of stem and root cuttings of Dalbergia melanoxylon (African blackwood) in response to rooting media sterilization in Tanzania. Tanzan J Sci 38(1):43–53
Ikeda K, Arima T (2000) Quality evaluation of standing trees by a stress-wave propagation method and its application II: evaluation of sugi stands and application to production of sugi (Cryptomeria japonica D. Don) structural square sawn timber. Mokuzai Gakkaishi 46(3):189–196 (in Japanese)
Ishiguri F, Kawashima M, Iizuka K, Yokota S, Yoshizawa N (2006) Relationship between stress-wave velocity of standing tree and wood quality in 27-year-old Hinoki (Chamaecyparis obtusa Endl.). J Soc Mater Sci 55(6):576–582 (in Japanese)
Da Silva F, Higuchi N, Nascimento CC, Matos JLM, de Paula EVCM, dos Santos J (2014) Nondestructive evaluation of hardness in tropical wood. J Trop For Sci 26(1):69–74
Van Duong D, Matsumura J (2018) Within-stem variations in mechanical properties of Melia azedarach planted in northern Vietnam. J Wood Sci 64:329–337. https://doi.org/10.1007/s10086-018-1725-9
Karlinasari L, Andini S, Worabai D, Pamungkas P, Budi SW, Siregar IZ (2018) Tree growth performance and estimation of wood quality in plantation trials for Maesopsis eminii and Shorea spp. J For Res 29(4):1157–1166. https://doi.org/10.1007/s11676-017-0510-8
Singleton R, DeBell DS, Marshall DD, Gartner BL (2003) Eccentricity and fluting in young—growth western hemlock in Oregon. West J Appl For 18(4):221–228
We thank JIFPRO members Kazuki Shibasaki and Yuhei Tanahashi, and the staff of Mpingo Conservation & Development Initiative, Joseph Protas, Iddy Emillius and others, for their helpful assistance and efforts in conducting this study. We also thank all people of study villages for their valuable time and understanding during our fieldwork.
A part of this article was presented at 2018 SWST/JWRS International Convention, Nagoya, Japan, November 2018.
This work was supported by the Japan International Cooperation Agency (JICA) as part of the BOP business promotion survey "Preparatory Survey on BOP Business for Sustainable Procurement of FSC Certificated Wood".
Musical Instruments & Audio Products Production Unit, Yamaha Corporation, 10-1 Nakazawa-cho, Naka-ku, Hamamatsu, 430-8650, Japan
Kazushi Nakai
Japan International Forestry Promotion & Cooperation Center, Rinyu Building, 1-7-12 Koraku, Bunkyo-ku, Tokyo, 112-0004, Japan
Moriyoshi Ishizuka & Seiichi Ohta
Mpingo Conservation & Development Initiative, P.O. Box 49, Kilwa Masoko, Kilwa, Lindi, Tanzania
Jonas Timothy & Makala Jasper
Kilwa District Council, P.O. Box 160, Kilwa Masoko, Kilwa, Lindi, Tanzania
Njabha M. Lyatura & Victor Shau
Research Institute for Sustainable Humanosphere, Kyoto University, Gokasho, Uji, Kyoto, 611-0011, Japan
Kazushi Nakai & Tsuyoshi Yoshimura
KN, MI and SO designed and mainly conducted the survey in this manuscript. KN analyzed and interpreted the data with MI, SO and TY. JT and MJ supported the implementation of the survey and contributed to understanding the general situation of the local community forests. NML and VS also assisted in data collection, including identification of local trees. All authors read and approved the final manuscript.
Correspondence to Kazushi Nakai.
Nakai, K., Ishizuka, M., Ohta, S. et al. Environmental factors and wood qualities of African blackwood, Dalbergia melanoxylon, in Tanzanian Miombo natural forest. J Wood Sci 65, 39 (2019). https://doi.org/10.1186/s10086-019-1818-0
Accepted: 25 July 2019
Dalbergia melanoxylon
\begin{document}
\title{Quantum chaotic systems with arbitrarily large Ehrenfest times} \author{Maciej Kuna} \affiliation{ Wydzia{\l} Fizyki Technicznej i Matematyki Stosowanej\\ Politechnika Gda\'nska, 80-952 Gda\'nsk, Poland}
\begin{abstract} A class of time-independent and physically meaningful Hamiltonians leads to evolution of observable quantities whose Ehrenfest times are arbitrarily large. This fact contradicts the popular claim that true chaos is excluded from quantum mechanics by first principles. \end{abstract} \pacs{05.45.Mt, 05.45.-a} \maketitle
In his introductory remarks to one of the conferences on quantum chaos Michael Berry formulated a kind of credo of theorists working on chaotic aspects of quantum systems: ``There is no chaos in quantum mechanics.(...) In all except some very special cases (e.g. the `quantum' system got by regarding the Liouville equation of a chaotic classical system as a Schr\"odinger equation, whose specialness is that its `Hamiltonian' is linear in the `momenta') $\hbar$ smoothes away the fine classical phase-space structure, and prevents chaos from developing. The inaccurate phrase `quantum chaos' is simply shorthand, denoting quantum phenomena characteristic of classically chaotic systems, quantal `reflections' or `parallels' of chaos..." \cite{Berry}. The true chaos --- involving hypersensitivity to initial conditions, strange attractors and the like --- is believed to be excluded from quantum mechanics by first principles. The orthodox faith of quantum physicists allows for chaos only in the very limited sense of a property of semiclassical approximations. This is why one of the classic textbooks on the subject has only signatures of chaos in the title \cite{Haake}, and an appropriate entry of the current PACS scheme reads: ``Semiclassical chaos (`quantum chaos')".
There are various reasons why chaos is claimed to be impossible in quantum mechanics, but two of them seem most suggestive. First, the dynamics of quantum states is unitary hence linear, while it is known that chaos in autonomous systems occurs for nonlinear evolutions. Secondly, the initial conditions in phase space are not given exactly due to the uncertainty principle for $\bm p$ and $\bm q$; in consequence the ``Ehrenfest time", determining for how long a quantum system can follow a classical chaotic trajectory, cannot be larger than a certain value determined by the Planck constant. The artificial example of a chaotic `quantum' system mentioned by Berry and discussed for the first time in \cite{CIS}, reduces essentially to the following calculation
\begin{eqnarray} i\hbar\dot \psi\big(\bm x(t)\big)=\dot{\bm x}(t)\cdot i\hbar\bm \nabla \psi(\bm x)\big|_{\bm x=\bm x(t)}=H\psi\big(\bm x(t)\big)\label{K} \end{eqnarray} where $\bm x(t)$ is a solution of a classical problem. The formula has physically nothing to do with quantization although formally it has a Schr\"odinger form with $H$ linear in $\bm p=-i\hbar\bm \nabla$, and the Hamiltonian can be made self-adjoint if one appropriately defines a scalar product.
The goal of this note is to show that, in spite of the above arguments, there exists a class of quantum systems with physically meaningful time-independent Hamiltonians, whose Ehrenfest times can be arbitrarily large. The necessary condition for chaos is here the same as in classical physics: The evolution equation for some {\it observables\/} must be nonlinear.
Indeed, let us consider the time-independent Hamiltonian \begin{eqnarray} H=\frac{\bm P^2}{2M}-\frac{1}{2}\big(\bm x\cdot \bm F+\bm F\cdot \bm x\big)\label{H} \end{eqnarray} where $\bm P$ and $\bm F$ commute. The Heisenberg equation of motion \begin{eqnarray} \dot {\bm P}=\frac{1}{i\hbar}[\bm P, H]=\bm F\label{F} \end{eqnarray} shows that $\bm F$ is a force. If $\bm F$ is a constant vector then $H$ describes, in particular, the dynamics of an atom of mass $M$ falling freely in a gravitational field. Now, we know that free fall is an idealization and that various velocity-dependent friction forces often occur. This is true also in the quantum case if, for example, the atom falls in the presence of laser light. The Hamiltonian (\ref{H}) with $\bm F$ depending on $\bm P$ is the simplest toy model of atomic cooling by light forces.
Friction forces occurring in realistic systems are nonlinear and thus the Heisenberg equation (\ref{F}) is in general nonlinear as well. For example the simple quadratic force \begin{eqnarray} \bm F &=& \left( \begin{array}{c} -\sigma P_1+ \sigma P_2\\ \tau P_1 - P_2\\ -\beta P_3 \end{array} \right) + P_1 \left( \begin{array}{c} 0\\ -P_3\\ P_2 \end{array} \right) , \end{eqnarray} where $\beta$, $\sigma$, $\tau$, are constant parameters, implies \begin{eqnarray}
\dot P_1 &=& \frac{1}{i\hbar} [P_1,H] = \sigma (P_2 - P_1), \\
\dot P_2 &=& \frac{1}{i\hbar} [P_2, H] = P_1(\tau - P_3) - P_2,\\
\dot P_3 &=& \frac{1}{i\hbar}[P_3,H] = P_1 P_2 -\beta P_3, \end{eqnarray} which is nothing else but the chaotic Lorenz system \cite{Lor}. The corresponding $H$ is an invariant of the dynamics, but not of the same type as the Ku\'s invariants occurring for some choices of $\beta$, $\sigma$, and $\tau$ \cite{Kus}.
A solution of the above Lorenz system is an operator whose action on momentum-space wave function is \begin{eqnarray} P_k(t)\psi(\bm p) &=& f_k(t,\bm p)\psi(\bm p), \end{eqnarray} where $f_k(t,\bm p)$ is a solution of the classical dynamical problem \begin{eqnarray}
\dot f_1 &=& \sigma (f_2 - f_1), \\
\dot f_2 &=& f_1(\tau - f_3) - f_2,\\
\dot f_3 &=& f_1 f_2 -\beta f_3, \end{eqnarray} with initial conditions $f_k(0,\bm p)=p_k$; $\psi(\bm p)$ is the wave function at $t=0$. The average value of the evolving observable thus reads \begin{eqnarray}
\langle\psi|P_k(t)|\psi\rangle=\int d^3p\, f_k(t,\bm p)|\psi(\bm p)|^2.\label{av} \end{eqnarray} The finite Ehrenfest time is typically claimed to be a consequence of the uncertainty principle in phase space, i.e. the minimal volume $\Delta p_j\Delta q_j\geq \hbar$. Eq.~(\ref{av}) explains why the dynamics of average momentum involves in this example an arbitrarily large Ehrenfest time: There is no uncertainty principle limiting the volume $\Delta p_1\Delta p_2\Delta p_3$. In the limiting case of an eigenstate of momentum operators, where the wave packet shrinks to the Dirac delta centered at $\bm k$, the average follows the trajectory $f_k(t,\bm k)$, i.e. the Ehrenfest time becomes infinite.
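As a numerical illustration of Eq.~(\ref{av}), the average $\langle\psi|P_k(t)|\psi\rangle$ can be estimated by propagating an ensemble of classical Lorenz trajectories whose initial momenta are drawn from $|\psi(\bm p)|^2$. The sketch below (Python/NumPy) assumes a Gaussian packet and the parameter values $\sigma=10$, $\tau=28$, $\beta=8/3$; both choices are illustrative and not fixed by the argument.
\begin{verbatim}
import numpy as np

sigma, tau, beta = 10.0, 28.0, 8.0/3.0   # illustrative parameters

def lorenz_rhs(f):
    f1, f2, f3 = f[:, 0], f[:, 1], f[:, 2]
    return np.stack([sigma*(f2 - f1),
                     f1*(tau - f3) - f2,
                     f1*f2 - beta*f3], axis=1)

rng = np.random.default_rng(0)
p0 = rng.normal([1.0, 1.0, 1.0], 0.1, size=(5000, 3))  # |psi(p)|^2
f, dt = p0.copy(), 1e-3
for _ in range(20000):            # forward Euler up to t = 20
    f = f + dt*lorenz_rhs(f)
print(f.mean(axis=0))             # <P_k(t)> of Eq. (av)
\end{verbatim}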
The fact that $\hbar$ cannot lead to any upper bound on the Ehrenfest time trivially follows also from the fact that (\ref{av}) is not in any sense related to the Planck constant; $\hbar$ is absent in both the evolution equation (which is just the Lorenz system) and the initial probability density, which is arbitrary. The form of (\ref{av}) is identical to the classical expression for an average trajectory and thus the standard classical estimates \cite{Schuster} based on the maximal Lyapunov exponent or Kolmogorov-Sinai entropy are valid and can be employed without any modification.
The Hamiltonian we have used cannot be claimed to be unphysical or exotic. One can only complain that it is too simple to be a realistic description of atoms interacting with external fields but, of course, the same happens in the classical theory of Hamiltonian chaos. The celebrated Henon-Heiles system is just a toy model of many-body gravitational interactions \cite{HH}.
$H$ is certainly not linear in momentum (the case mentioned by Berry), although it is linear in $\bm x$. Hamiltonians containing terms linear in some observable occur in so many applications that one can even claim they are generic in quantum mechanics. Many phenomenological quantum models describing interactions with systems whose structure is too complicated to allow for first-principle modeling (e.g. systems interacting with reservoirs) involve such interaction terms. The fact that $H$ contains a third-order interaction is also not strange: Third-order polynomials are the simplest functions that lead to nonlinear Heisenberg equations, and nonlinearity is certainly a necessary condition for chaos, at least in time-independent systems. Translating (\ref{H}) into quantum optical terms we get a three-mode interaction that includes squeezing and two- and three-photon processes. In time-dependent Hamiltonians one can introduce chaos by an appropriate choice of chaotic maps, such as the Arnold cat map discussed by Weigert \cite{W1,W2,W3}, and then the Heisenberg evolution can be piecewise linear.
Let me try to put all of this in a wider context. How is it possible that in spite of linearity of quantum mechanics we have systems evolving chaotically? The answer is very simple: Heisenberg-picture equations are typically nonlinear --- this is anyway why one can speak of nonlinear quantum optics. Quantum chaos can exist because operator equations for observables can be chaotic. Although certain tools for investigation of chaotic properties at the level of observables were prepared in the literature a long time ago \cite{LE}, one could not find an example of a chaotic Heisenberg equation. The Hamiltonians (\ref{H}) solve the problem in a trivial way \cite{?}.
I'm indebted to Marek Czachor, Adam Majewski, and Stefan Weigert for discussions that helped to improve the argument and its presentation.
\end{document}
Bill buys a stock that decreases by $20\%$ on the first day, and then on the second day the stock increases by $30\%$ of its value at the end of the first day. What was the overall percent increase in Bill's stock over the two days?
Let the original value of the stock be $x$. At the end of the first day, the stock has fallen to $.8x$. On the second day, the stock rises to $1.3(.8x)=1.04x$. Thus, the stock has increased $\boxed{4}$ percent from its original price over the two days.
\begin{definition}[Definition:Rooted Tree]
A '''rooted tree''' is a tree with a countable number of nodes, in which a particular node is distinguished from the others and called the '''root node'''.
\end{definition}
\begin{document}
\title{\textbf{\large{Horizontal resolution in a nested-domain WRF simulation: a Bayesian analysis approach}}}
\author{\centerline{\textsc{Michel d. S. Mesquita\footnote{}}}\\
\centerline{\textit{\footnotesize{Uni Climate, Uni Research and Bjerknes Centre for Climate Research, Bergen, Norway}}}\\ \centerline{\textit{\footnotesize{*Corresponding author email: [email protected]}}} \and \centerline{\textsc{Bj\o rn \AA dlandsvik}}\\% Add additional authors, different insitution \centerline{\textit{\footnotesize{Institute of Marine Research, Bergen, Norway}}} \and \centerline{\textsc{Cindy Bruy\`{e}re}}\\% Add additional authors, different insitution \centerline{\textit{\footnotesize{National Center for Atmospheric Research, Boulder,CO, USA}}} \and \centerline{\textsc{Anne D. Sandvik}}\\% Add additional authors, different insitution \centerline{\textit{\footnotesize{Institute of Marine Research, Bergen, Norway}}} }
\ifthenelse{\boolean{dc}} { \twocolumn[ \begin{@twocolumnfalse} \amstitle
\begin{center} \begin{minipage}{13.0cm} \begin{abstract}
The fast-paced development of state-of-the-art limited area models and faster computational resources have made it possible to create simulations at increasing horizontal resolution. This has led to a ubiquitous demand for even higher resolutions from users of various disciplines. This study revisits one of the simulations used in marine ecosystem projects at the Bjerknes Centre. We present a fresh perspective on the assessment of these data, related more specifically to: a) the value added by increased horizontal resolution; and b) a new method for comparing sensitivity studies. The assessment is made using a Bayesian framework for the distribution of mean surface temperature in the Hardanger fjord region in Norway. Population estimates are calculated based on samples from the joint posterior distribution generated using a Monte Carlo procedure. The Bayesian statistical model is applied to output data from the Weather Research and Forecasting (WRF) model at three horizontal resolutions (9, 3 and 1 km) and the ERA Interim Reanalysis. The period considered in this study is from 2007 to 2009, for the months of April, May and June.
\newline
\begin{center}
\rule{38mm}{0.2mm}
\end{center} \end{abstract} \end{minipage} \end{center} \end{@twocolumnfalse} ] } { \amstitle \begin{abstract} The fast-paced development of state-of-the-art limited area models and faster computational resources have made it possible to create simulations at increasing horizontal resolution. This has led to a ubiquitous demand for even higher resolutions from users of various disciplines. This study revisits one of the simulations used in marine ecosystem projects at the Bjerknes Centre. We present a fresh perspective on the assessment of these data, related more specifically to: a) the value added by increased horizontal resolution; and b) a new method for comparing sensitivity studies. The assessment is made using a Bayesian framework for the distribution of mean surface temperature in the Hardanger fjord region in Norway. Population estimates are calculated based on samples from the joint posterior distribution generated using a Monte Carlo procedure. The Bayesian statistical model is applied to output data from the Weather Research and Forecasting (WRF) model at three horizontal resolutions (9, 3 and 1 km) and the ERA Interim Reanalysis. The period considered in this study is from 2007 to 2009, for the months of April, May and June. \end{abstract}
}
\section{Introduction}
The need for high-resolution data has become important in several disciplines. Such data provide added information, for example, for the study of regions with complex topography such as the Norwegian fjords \citep{heikkilaetal2011,myksvolletal2012}. However, producing such data using a limited area model can still be constrained by the available computing resources. For example, in order to make inferences about a model simulation, one needs a large sample to produce robust statistics \citep{lopezetal2006}. Producing large samples at high resolution can become computationally expensive. This is especially the case when testing different combinations of parameterization schemes or a different model setup.
In this study, we present an alternative approach to analyzing output from limited area models based on Bayesian probability. Bayesian probability theory has been increasingly applied to regional climate modeling experiments in the past few years \citep{buseretal2009,buseretal2010}. The approach presented here allows one to make use of small samples to make inferences about the statistical population. The use of probability distributions also provides a richer view of the data for comparison against observations. The next section discusses the data, methods and the Bayesian approach. Section 3 presents the results, followed by the conclusion in Section 4.
\section{Data and Methods}
The experiments were made using the Weather Research and Forecasting (WRF) model version 3.1. Figure \ref{f1} shows the domain configuration, which consisted of a parent domain at 9 km resolution and two nested domains at 3 km and 1 km, respectively (with feedback$=$1, two-way nesting). They were run using 31 vertical levels. The microphysical scheme chosen was the WRF Single-Moment 3-class scheme (mp\_physics$=$3). The cumulus parameterization option was turned off (cu\_physics$=$0). The planetary boundary layer scheme was the Yonsei University scheme (bl\_pbl\_physics$=$1). The longwave radiation scheme used was the RRTM scheme (ra\_lw\_physics$=$1) and the shortwave radiation was the Dudhia scheme (ra\_sw\_physics$=$1).
\begin{figure}
\caption{WRF model domain setup: parent domain at 9 km (outer domain), nest at 3 km (d02) and nest at 1 km (d03).}
\label{f1}
\end{figure}
ECMWF ERA-Interim Re-Analysis was used as the lateral boundary condition data. These data were obtained from the ECMWF Data Server. The simulation was run from 2007 to 2009. The months of April, May and June of 2008 and 2009 were retained for the analysis. Here, results are shown for the three-hourly 2 m temperature in the Hardanger fjord region for the month of April. The box selected for the spatial averaging is located between 59.32$^\circ$N, 60.75$^\circ$N and 5.05$^\circ$E, 7.90$^\circ$E. From the time series created, we randomly selected 200 timesteps for calculating the sample mean and variance.
An informative prior was selected based on the Kvams\o y weather station located at 60.358$^\circ$N and 6.275$^\circ$E. These data were obtained from the Norwegian Meteorological Institute data server at $eklima.no$. The Kvams\o y weather station has been operational since November 2003. The average surface temperature for April is 7.48$\pm$1.27$^\circ$C for the years of 2003 to 2011.
\subsection{The Bayesian model} In this study, the Bayesian model is applied to the 2m temperature in the Hardanger fjord region. It considers the case in which the mean ($\theta$) and variance ($\sigma^2$) are unknown \citep{hoff2009, gelmanetal2004}. For the joint prior distribution $p(\theta,\sigma^2)$ for $\theta$ and $\sigma^2$, the posterior inference will use Bayes' rule, as shown in Equation \ref{eq1}:
\begin{equation}\label{eq1} p(\theta,\sigma^2\mid y_1,\ldots,y_n)=\frac{p(y_1,\ldots,y_n \mid \theta,\sigma^2)p(\theta,\sigma^2)}{p(y_1,\ldots,y_n)} \end{equation}
where $y_1,\ldots,y_n$ represent the data. Since the joint distribution for two quantities can be expressed as the product of a conditional probability and a marginal probability, the posterior distribution can likewise be decomposed (Eq. \ref{eq2}):
\begin{equation}\label{eq2} p(\theta, \sigma^2 \mid y_1, \ldots, y_n) = p(\theta \mid \sigma^2, y_1, \ldots, y_n) p(\sigma^2 \mid y_1, \ldots, y_n) \end{equation}
where the first part of the equation is the conditional probability of $\theta$ on the variance and the data; and the second part is the marginal distribution of $\sigma^2$. The conditional probability part of the equation can be determined as a normal distribution:
\begin{equation} \{\theta \mid y_1, \ldots, y_n, \sigma^2\} \sim normal(\mu_n, \sigma^2 / \kappa_n) \end{equation}
where $\kappa_n=\kappa_0 + n$ combines the prior sample size ($\kappa_0$) with the number of observations from the data ($n$). $\mu_n$ is given by: $\mu_n = \frac{(\kappa_0 / \sigma^2)\mu_0 + (n/\sigma^2) \overline{y}}{\kappa_0 / \sigma^2 + n/\sigma^2}=\frac{\kappa_0 \mu_0 + n \overline{y}}{\kappa_n}$, where $\overline{y}$ is the sample mean taken from the WRF simulation. The prior mean is given by $\mu_0$. The calculation of $\sigma^2$ is explained next.
The second part of equation \ref{eq2}, the marginal distribution of $\sigma^2$, can be obtained by integrating over the unknown value of the mean, $\theta$, as follows:
\begin{eqnarray} p(\sigma^2 \mid y_1, \ldots, y_n) &\propto& p(\sigma^2)\, p(y_1, \ldots, y_n \mid \sigma^2) \\ &=& p(\sigma^2) \int p(y_1, \ldots, y_n \mid \theta, \sigma^2)\, p(\theta \mid \sigma^2)\, d\theta \end{eqnarray}
Solving the integral, and considering the precision ($1/\sigma^2$) such that the distribution is conjugate, gives the following gamma distribution:
\begin{equation} \{ 1/\sigma^2 \mid y_1, \ldots, y_n \} \sim gamma(\nu_n/2, \nu_n \sigma_n^2/2) \end{equation}
where $\nu_n = \nu_0 +n$ is the sum of the degrees of freedom of the prior ($\nu_0$) and of the data ($n$). $\sigma_n^2$ is given by $\sigma_n^2 = \frac{1}{\nu_n}[\nu_0 \sigma_0^2 + (n-1) s^2 + \frac{\kappa_0 n}{\kappa_n} (\overline{y} - \mu_0)^2]$, where $\overline{y}$ is the sample mean and $s^2$ is the sample variance, both taken from the WRF simulation. $\sigma_0^2$ is the prior variance.
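As a concrete illustration, these conjugate updates amount to a few lines of arithmetic. The Python sketch below uses the station-based prior described above ($\mu_0=7.48$, $\sigma_0=1.27$); the prior weights $\kappa_0$, $\nu_0$ and the WRF sample statistics are placeholder assumptions, not values from this study.
\begin{verbatim}
# Conjugate posterior updates for a normal model with unknown
# mean and variance (prior weights and sample stats are illustrative).
mu0, sigma0_sq = 7.48, 1.27**2   # station-based prior (Kvamsoy, April)
kappa0, nu0 = 1.0, 1.0           # assumed prior sample sizes (placeholder)
n = 200                          # timesteps sampled from the WRF series
ybar, s_sq = 4.8, 7.1            # hypothetical WRF sample mean / variance

kappa_n = kappa0 + n
nu_n = nu0 + n
mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
sigma_n_sq = (nu0 * sigma0_sq + (n - 1) * s_sq
              + kappa0 * n / kappa_n * (ybar - mu0)**2) / nu_n
\end{verbatim}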
\subsection{Monte Carlo sampling}
Samples of $\theta$ and $\sigma^2$ can be generated from their joint posterior distribution using the following Monte Carlo procedure \citep{hoff2009}:
\begin{align*} \sigma^{2(1)} &\sim inv\; gamma\left(\frac{\nu_n}{2}, \frac{\sigma^2_n \nu_n}{2}\right), & \theta^{(1)} &\sim normal\left(\mu_n, \frac{\sigma^{2(1)}}{\kappa_n}\right) \\ &\;\;\vdots & &\;\;\vdots \\ \sigma^{2(S)} &\sim inv\; gamma\left(\frac{\nu_n}{2}, \frac{\sigma^2_n \nu_n}{2}\right), & \theta^{(S)} &\sim normal\left(\mu_n, \frac{\sigma^{2(S)}}{\kappa_n}\right) \end{align*}
where $\sigma^2$ is estimated using an inverse-gamma distribution ($inv\;gamma$). Each $\theta^{(S)}$ is sampled from its conditional distribution given the data and $\sigma^2=\sigma^{2(S)}$. The simulated pairs $\{(\sigma^{2(1)}, \theta^{(1)}), \ldots, (\sigma^{2(S)}, \theta^{(S)}) \}$ are independent samples from the joint posterior distribution, i.e.: $p(\theta, \sigma^2 \mid y_1, \ldots, y_n)$. The simulated sequence $\{\theta^{(1)}, \ldots, \theta^{(S)}\}$ can be seen as independent samples from the marginal posterior distribution $p(\theta \mid y_1, \ldots, y_n)$, and so this sequence can be used to make Monte Carlo approximations to functions involving $p(\theta \mid y_1, \ldots, y_n)$. Although each $\theta^{(s)}$ is sampled conditionally, each is conditional on a different value of $\sigma^2$; together, they therefore constitute samples from the marginal distribution of $\theta$.
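Continuing the sketch above, the sampling procedure itself is equally compact; a gamma draw on the precision scale is equivalent to an inverse-gamma draw on the variance scale, and quantile-based posterior bounds of the kind reported in Table \ref{t1} follow directly.
\begin{verbatim}
# Monte Carlo sampling from the joint posterior (continues the sketch
# above; requires numpy).
import numpy as np

rng = np.random.default_rng(1)
S = 10000
# 1/sigma^2 ~ gamma(nu_n/2, rate = nu_n*sigma_n_sq/2); numpy's gamma
# uses shape/scale, so scale = 2 / (nu_n * sigma_n_sq).
precision = rng.gamma(shape=nu_n / 2, scale=2 / (nu_n * sigma_n_sq), size=S)
sigma_sq = 1 / precision
theta = rng.normal(loc=mu_n, scale=np.sqrt(sigma_sq / kappa_n))

print(np.quantile(theta, [0.025, 0.975]))     # 95% posterior bound, theta
print(np.quantile(sigma_sq, [0.025, 0.975]))  # 95% posterior bound, sigma^2
\end{verbatim}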
\section{Results}
Monte Carlo samples from the joint distributions of the population mean and variance are shown in Figure \ref{f2}. The ERA Interim distribution (ERAi), on the top left, shows larger spread both for the mean and the variance as compared to the three domains. The distribution for the 9 km domain is visibly shifted and does not match the ERA Interim data. The 3 km nest shows the closest approximation to the mean of the ERA Interim, whereas the 1 km nest approximates the variance more closely.
\begin{figure}
\caption{Monte Carlo samples from the joint distributions of the population mean ($\theta$) and variance ($\sigma^2$) for ERA Interim (ERAi) and for the different domains. The values in black show the mean value of the population mean (right side) and of the population variance (left side). Accordingly, the mean values of $\theta$ and $\sigma^2$ for the ERA Interim are indicated in red. Temperature given in degrees Celsius.}
\label{f2}
\end{figure}
Figure \ref{f3} shows the marginal distribution of the mean, based on the Monte Carlo sampling. The red line indicates the mean value of the marginal distribution for the ERA Interim. The posterior bounds of the 9 km parent domain do not contain the mean value of the ERA Interim. Table \ref{t1} shows that even though there is some overlap between the posterior bounds of ERA Interim and the 9 km domain, this overlap is minimal. The 3 km and 1 km nests show a closer overlap with the ERA Interim data. The 3 km resolution domain approximates the mean most realistically, which is also confirmed by its posterior bound overlap with ERA Interim (Table \ref{t1}).
\begin{figure}
\caption{Monte Carlo samples from the marginal distribution of $\theta$ for ERA Interim (ERAi) and for the different domains. The blue vertical lines give a 95\% quantile-based posterior bound. In red, the mean value of the ERA Interim posterior marginal distribution. Temperature given in degrees Celsius.}
\label{f3}
\end{figure}
\begin{table}[t] \caption{Posterior distribution summary for the mean ($\theta$) and variance ($\sigma^2$) based on Monte Carlo sampling. The 95\% posterior bound (PB) is also indicated for each variable. Temperature units given in degrees Celsius.}\label{t1} \begin{center} \begin{tabular}{lcccc} \hline\hline
& $\theta$ & $\theta$ PB & $\sigma^2$ & $\sigma^2$ PB\\ \hline
ERAi & 4.26 & (3.87, 4.66) & 9.90 & (8.30, 11.93) \\
d01 & 4.80 & (4.57, 5.04) & 7.08 & (6.26, 8.07) \\
d02 & 4.19 & (3.93, 4.44) & 8.04 & (7.12, 9.14) \\
d03 & 4.56 & (4.29, 4.83) & 9.19 & (8.11, 10.46) \\ \hline \end{tabular} \end{center} \end{table}
The marginal distribution of the ERA Interim variance is approximated most closely by the 1 km resolution domain, as shown in Figure \ref{f4}. The mean value of the ERA Interim marginal distribution is within the posterior bounds for that resolution. In contrast, the posterior bounds of the 9 km and 3 km domains do not contain the ERA Interim mean value. There is, however, a better overlap between the ERA Interim and the 3 km posterior distribution, compared to the 9 km one (Table \ref{t1}).
\begin{figure}
\caption{The same as Figure \ref{f3}, but for the precision, $1/\sigma^2$.}
\label{f4}
\end{figure}
\section{Conclusion}
This study has used a Bayesian statistical model applied to output data from the Weather Research and Forecasting (WRF) model at three horizontal resolutions (9, 3 and 1 km). Station-based observational data were used to provide an informative prior. We have presented a fresh perspective on the assessment of data from the WRF model, related more specifically to: a) the value added by increased horizontal resolution; and b) a new method for comparing sensitivity studies.
The increased horizontal resolution is able to approximate the mean and the variance of the observations more closely. This approximation is crucial, for example, when one is to use these data to force a regional ocean model \citep{myksvolletal2012}. The Bayesian method introduced here provides a richer probabilistic view of the dataset. It also obviates the need for long simulations when estimating the population mean or variance, thus saving computational resources. In high-resolution experiments such as this one, the amount of available computational resources is a binding constraint. If one is to use standard statistics, a larger sample is needed to be able to make robust inferences. Hence, through the use of prior information, the Bayesian framework provides an alternative approach to estimating the statistical population, and in this case, for assessing the bias in the model simulation. It is also useful for sensitivity studies where one needs to compare not only resolution, but also the use of different parameterization schemes. This approach can also be applied to other variables by adapting it to their underlying distribution.
\begin{acknowledgment} We would like to thank NCAR for making the WRF model publicly available. We also thank ECMWF and the Norwegian Meteorological Institute for the datasets provided. This study has been funded through the Downscaling Synthesis project at the Bjerknes Centre for Climate Research, Bergen, Norway. \end{acknowledgment}
\end{document} | arXiv |
Yuktibhāṣā, 16th century, first modern proof of $\frac{\pi}4=\int_0^1 \frac{dt}{1+t^2}=\sum_{n\ge 0} \frac{(-1)^n}{2n+1}$
It is an Indian (Kerala) text; it would contain the first modern proof (building on earlier knowledge) of
$$\frac{\pi}4=\int_0^1 \frac{dt}{1+t^2}=\sum_{n\ge 0} \frac{(-1)^n}{2n+1}$$ The integral would be evaluated as a Riemann sum.
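(As a numerical aside, not part of the original question: both quantities are easy to check by machine; a midpoint Riemann sum and the partial sums of the series both approach $\pi/4 \approx 0.785398$.)

```python
# Numerical check: Riemann sum of 1/(1+t^2) on [0,1] vs. the Leibniz series.
import math

N = 100_000
riemann = sum(1 / (1 + ((i + 0.5) / N) ** 2) for i in range(N)) / N
leibniz = sum((-1) ** n / (2 * n + 1) for n in range(N))
print(riemann, leibniz, math.pi / 4)  # both approach 0.785398...
```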
3 references, giving 3 different dates:
wikipedia/Yuktibhāṣā 1530
The First Textbook of Calculus: "Yuktibhāṣā" 1555
The Discovery of the Series Formula for π by Leibniz, Gregory and Nilakantha circa 1600
In addition to the date problem, can we decipher at least a small part of the original text to get an idea of its content?
It bears repetition that, as in the excerpted passages, so in the entire text, no symbols are employed to represent the mathematical objects being manipulated, no formal notation for relations among them and operations on them, no diagrammatic guide to the geometric constructions invoked.
There is an English translation of the text, but it is really hard to follow, not made for mathematicians: Ganita-Yukti-Bhāṣā (Rationales in Mathematical Astronomy) of Jyeṣṭhadeva.
Edit: I can't find the proof of $\frac{\pi}4=\int_0^1 \frac{dt}{1+t^2}$ or of $\int_0^1 \frac{dt}{1+t^2}=\sum_{n\ge 0} \frac{(-1)^n}{2n+1}$ in this translation. So this may be the main question: where is the proof? See this excerpt; it reads more like a cooking recipe attempting to state the Leibniz series than like rigorous mathematics.
In the translation Rsine means $R \sin(\theta)$ (with $R$ the radius) and Rversine means $R(1-\cos(\theta))$
mathematics calculus ancient-india
reuns
The third link is: Roy, Ranjan, "The discovery of the series formula for π by Leibniz, Gregory and Nilakantha." Math. Mag. 63 (1990), no. 5, 291–306.
– Gerald Edgar
from that we get: Nilakantha's results were presented in his Tantrasangraha, composed in Sanskrit verse around 1500. An anonymous commentary entitled Tantrasangraha-vakhya then appeared, and a century later Jyesthadeva (c. 1500–c. 1610) published a commentary entitled Yuktibhasa that contained proofs of the earlier results. The material in the Tantrasangraha itself seems to have been the earlier work of Madhava, a mathematician who lived from 1340 to 1425 in Kerala, the southwest coast of India.
It is not surprising that the sought proofs are not there, considering that the Kerala school did not operate with any version of integrals. How Nilakantha (who attributes it to Madhava) geometrically derived the power series for $\arctan$, from which the formula for $\pi$ follows, is described at the end of Roy's paper linked in the OP; here is a non-paywalled version. It does not involve any integrals.
@Conifold Eq. 14 is an integral, and I hardly see how to get the Leibniz series without $\pi/4=\int_0^1 \frac{dt}{1+t^2}$. Roy is the 3rd reference I mentioned; I explained the problem in my question: people are often overinterpreting when describing the history of maths, and for the Kerala school nobody is really explaining what we can find in the texts, i.e., is it formulas out of nowhere (cooking recipes), or are there some maths (e.g., proofs)?
– reuns
$\pi/4=\int_0^1 \frac{dt}{1+t^2}$ follows from a purely geometric argument: adding the chord lengths of some small sectors whose sides have tangent $t$ and $t+h$; this chord length is $\approx \frac{h}{1+t^2}$. This comment is the kind of proof I want to find.
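(Editorial note: to spell out the geometric argument from the comment above in modern notation — this is a reconstruction, not a claim about the original text. With $t=\tan\theta$, one has $\frac{d\theta}{dt}=\frac{1}{1+t^2}$, so summing the small arcs gives
$$\frac{\pi}{4}=\int_0^{\pi/4}d\theta=\int_0^1\frac{dt}{1+t^2},$$
and expanding $\frac{1}{1+t^2}=\sum_{n\ge0}(-1)^n t^{2n}$ and integrating term by term yields the Leibniz series.)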
Since the Yuktibhasa (The Rationale) was written in Malayalam in 1530 by Jyesthadeva, a Keralan astronomer, it will be easily translatable into English. The question is finding someone to finance such a venture.
Personally, I think that as India becomes more aware of its own scientific heritage instead of just catching up with the West (and according to Matilde Marcolli, a geometer, the best of their institutes compete with the best of Western institutes), it will become more likely that we will see such translations.
Mozibur Ullah
There is a supposed English translation of the text "Ganita-Yukti-Bhāṣā (Rationales in Mathematical Astronomy) of Jyeṣṭhadeva" (on libgen) but it is hard to follow, and it does not give the modern mathematical context to make it easy to follow by mathematicians (for the Leibniz series part, which would be its main scientific achievement). I don't think the date 1530 is accurate.
@reuns: The date is sourced from Wikipedia, which refers to the book, Yuktibhasa of Jyesthadeva: A Book on Indian Rationales in Indian Mathematics and Astronomy, an Analytic Appraisal. You might want to see what they have to say.
– Mozibur Ullah
It is there; see the introduction mentioning 1500-1610
@reuns: Well, 1530 is within that range ...
Problems in Mathematics
Tagged: Ohio State.LA
Express a Vector as a Linear Combination of Given Three Vectors
Problem 298
\[\mathbf{v}_1=\begin{bmatrix}
1 \\
\end{bmatrix}, \mathbf{v}_2=\begin{bmatrix}
\end{bmatrix}, \mathbf{b}=\begin{bmatrix}
13 \\
\end{bmatrix}.\] Express the vector $\mathbf{b}$ as a linear combination of the vector $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$.
(The Ohio State University, Linear Algebra Midterm Exam Problem)
Compute and Simplify the Matrix Expression Including Transpose and Inverse Matrices
Let $A, B, C$ be the following $3\times 3$ matrices.
\[A=\begin{bmatrix}
1 & 2 & 3 \\
4 &5 &6 \\
7 & 8 & 9
\end{bmatrix}, B=\begin{bmatrix}
\end{bmatrix}, C=\begin{bmatrix}
-1 & 0 & 1 \\
\end{bmatrix}.\] Then compute and simplify the following expression.
\[(A^{\trans}-B)^{\trans}+C(B^{-1}C)^{-1}.\]
Solve the System of Linear Equations and Give the Vector Form for the General Solution
Solve the following system of linear equations and give the vector form for the general solution.
\begin{align*}
x_1 -x_3 -2x_5&=1 \\
x_2+3x_3-x_5 &=2 \\
2x_1 -2x_3 +x_4 -3x_5 &= 0
\end{align*}
The Possibilities For the Number of Solutions of Systems of Linear Equations that Have More Equations than Unknowns
Determine all possibilities for the number of solutions of each of the system of linear equations described below.
(a) A system of $5$ equations in $3$ unknowns and it has $x_1=0, x_2=-3, x_3=1$ as a solution.
(b) A homogeneous system of $5$ equations in $4$ unknowns and the rank of the system is $4$.
Quiz 4: Inverse Matrix/ Nonsingular Matrix Satisfying a Relation
(a) Find the inverse matrix of
\end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.
(b) Find a nonsingular $2\times 2$ matrix $A$ such that
\[A^3=A^2B-3A^2,\] where
\[B=\begin{bmatrix}
4 & 1\\
2& 6
\end{bmatrix}.\] Verify that the matrix $A$ you obtained is actually a nonsingular matrix.
Quiz 3. Condition that Vectors are Linearly Dependent/ Orthogonal Vectors are Linearly Independent
(a) For what value(s) of $a$ is the following set $S$ linearly dependent?
\[ S=\left \{\,\begin{bmatrix}
\end{bmatrix}, \begin{bmatrix}
a \\
-1 \\
a^2 \\
a^3
\end{bmatrix} \, \right\}.\]
(b) Let $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ be a set of nonzero vectors in $\R^m$ such that the dot product
\[\mathbf{v}_i\cdot \mathbf{v}_j=0\] when $i\neq j$.
Prove that the set is linearly independent.
Quiz 2. The Vector Form For the General Solution / Transpose Matrices. Math 2568 Spring 2017.
(a) The given matrix is the augmented matrix for a system of linear equations.
Give the vector form for the general solution.
\[ \left[\begin{array}{rrrrr|r}
1 & 0 & -1 & 0 &-2 & 0 \\
0 & 1 & 2 & 0 & -1 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 \\
\end{array} \right].\]
(b) Let
4 &5 &6
\end{bmatrix}, \mathbf{v}=\begin{bmatrix}
\[\mathbf{v}^{\trans}\left( A^{\trans}-(A-B)^{\trans}\right)C.\]
Quiz 1. Gauss-Jordan Elimination / Homogeneous System. Math 2568 Spring 2017.
(a) Solve the following system by transforming the augmented matrix to reduced echelon form (Gauss-Jordan elimination). Indicate the elementary row operations you performed.
x_1+x_2-x_5&=1\\
x_2+2x_3+x_4+3x_5&=1\\
x_1-x_3+x_4+x_5&=0
(b) Determine all possibilities for the solution set of a homogeneous system of $2$ equations in $2$ unknowns that has a solution $x_1=1, x_2=5$.
Eigenvalues of a Hermitian Matrix are Real Numbers
Show that eigenvalues of a Hermitian matrix $A$ are real numbers.
(The Ohio State University Linear Algebra Exam Problem)
Maximize the Dimension of the Null Space of $A-aI$
\[ A=\begin{bmatrix}
5 & 2 & -1 \\
-1 & 2 & 5
\end{bmatrix}.\]
Pick your favorite number $a$. Find the dimension of the null space of the matrix $A-aI$, where $I$ is the $3\times 3$ identity matrix.
Your score of this problem is equal to that dimension times five.
(The Ohio State University Linear Algebra Practice Problem)
Given All Eigenvalues and Eigenspaces, Compute a Matrix Product
Let $C$ be a $4 \times 4$ matrix with all eigenvalues $\lambda=2, -1$ and eigensapces
\[E_2=\Span\left \{\quad \begin{bmatrix}
\end{bmatrix} \quad\right \} \text{ and } E_{-1}=\Span\left \{ \quad\begin{bmatrix}
\end{bmatrix},\quad \begin{bmatrix}
\end{bmatrix} \quad\right\}.\]
Calculate $C^4 \mathbf{u}$ for $\mathbf{u}=\begin{bmatrix}
\end{bmatrix}$ if possible. Explain why if it is not possible!
Linear Transformation and a Basis of the Vector Space $\R^3$
Let $T$ be a linear transformation from the vector space $\R^3$ to $\R^3$.
Suppose that $k=3$ is the smallest positive integer such that $T^k=\mathbf{0}$ (the zero linear transformation) and suppose that we have $\mathbf{x}\in \R^3$ such that $T^2\mathbf{x}\neq \mathbf{0}$.
Show that the vectors $\mathbf{x}, T\mathbf{x}, T^2\mathbf{x}$ form a basis for $\R^3$.
Subspace of Skew-Symmetric Matrices and Its Dimension
Let $V$ be the vector space of all $2\times 2$ matrices. Let $W$ be a subset of $V$ consisting of all $2\times 2$ skew-symmetric matrices. (Recall that a matrix $A$ is skew-symmetric if $A^{\trans}=-A$.)
(a) Prove that the subset $W$ is a subspace of $V$.
(b) Find the dimension of $W$.
A Matrix Representation of a Linear Transformation and Related Subspaces
Let $T:\R^4 \to \R^3$ be a linear transformation defined by
\[ T\left (\, \begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{bmatrix} \,\right) = \begin{bmatrix}
x_1+2x_2+3x_3-x_4 \\
3x_1+5x_2+8x_3-2x_4 \\
x_1+x_2+2x_3
\end{bmatrix}.\]
(a) Find a matrix $A$ such that $T(\mathbf{x})=A\mathbf{x}$.
(b) Find a basis for the null space of $T$.
(c) Find the rank of the linear transformation $T$.
Inner Product, Norm, and Orthogonal Vectors
Let $\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3$ be vectors in $\R^n$. Suppose that the vectors $\mathbf{u}_1$ and $\mathbf{u}_2$ are orthogonal, the norm of $\mathbf{u}_2$ is $4$, and $\mathbf{u}_2^{\trans}\mathbf{u}_3=7$. Find the value of the real number $a$ in $\mathbf{u_1}=\mathbf{u_2}+a\mathbf{u}_3$.
(The Ohio State University, Linear Algebra Exam Problem)
Express a Vector as a Linear Combination of Other Vectors
Express the vector $\mathbf{b}=\begin{bmatrix}
\end{bmatrix}$ as a linear combination of the vectors
\end{bmatrix},
\mathbf{v}_2=
\begin{bmatrix}
(The Ohio State University, Linear Algebra Exam)
Compute the Product $A^{2017}\mathbf{u}$ of a Matrix Power and a Vector
-1 & 2 \\
0 & -1
\end{bmatrix} \text{ and } \mathbf{u}=\begin{bmatrix}
\end{bmatrix}.\] Compute $A^{2017}\mathbf{u}$.
10 True or False Problems about Basic Matrix Operations
Test your understanding of basic properties of matrix operations.
There are 10 True or False Quiz Problems.
These 10 problems are very common and essential.
So make sure to understand these, and don't lose points if any of them appears on your exam.
(These are actual exam problems at the Ohio State University.)
You can take the quiz as many times as you like.
The solutions will be given after completing all the 10 problems.
Possibilities For the Number of Solutions for a Linear System
Determine whether the following systems of equations (or matrix equations) described below has no solution, one unique solution or infinitely many solutions and justify your answer.
(a) \[\left\{
\begin{array}{c}
ax+by=c \\
dx+ey=f,
\end{array}
\right.
\] where $a, b, c, d, e, f$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows:
\[\begin{bmatrix}
1 & 0 & -1 & 0 \\
0 &1 & 2 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.\]
(The Ohio State University, Linear Algebra Exam)
Quiz: Possibilities For the Solution Set of a Homogeneous System of Linear Equations
4 multiple choice questions about possibilities for the solution set of a homogeneous system of linear equations.
The solutions will be given after completing all problems.
\begin{document}
\title{Ranked Sparsity: A Cogent Regularization Framework for Selecting and
Estimating Feature Interactions and Polynomials}
\noindent Ryan A. Peterson, Joseph E. Cavanaugh \newline
\noindent \textbf{Abstract} \newline
\noindent We explore and illustrate the concept of ranked sparsity, a phenomenon that often occurs naturally in modeling applications when an expected disparity exists in the quality of information between different feature sets. Its presence can cause traditional and modern model selection methods to fail because such procedures commonly presume that each potential parameter is equally worthy of entering into the final model -- we call this presumption ``covariate equipoise''. However, this presumption does not always hold, especially in the presence of derived variables. For instance, when all possible interactions are considered as candidate predictors, the premise of covariate equipoise will often produce over-specified and opaque models. The sheer number of additional candidate variables grossly inflates the number of false discoveries in the interactions, resulting in unnecessarily complex and difficult-to-interpret models with many (truly spurious) interactions. We suggest a modeling strategy that requires a stronger level of evidence in order to allow certain variables (e.g.~interactions) to be selected in the final model. This ranked sparsity paradigm can be implemented with the sparsity-ranked lasso (SRL). We compare the performance of SRL relative to competing methods in a series of simulation studies, showing that the SRL is a very attractive method because it is fast, accurate, and produces more transparent models (with fewer false interactions). We illustrate its utility in an application to predict the survival of lung cancer patients using a set of gene expression measurements and clinical covariates, searching in particular for gene-environment interactions.\newline
\noindent \textbf{Keywords}: derived variables, feature selection, information, lasso, model selection \newline
\noindent \textbf{Declarations}: Not applicable. \newline
\noindent \textbf{Date last modified}: 12/06/2021 \newline
\noindent \textbf{Availability of data and material}: Data used in this application is publicly available via GEO database accession number GSE68465. \newline
\noindent \textbf{Code availability}: Code for simulations and methods is included as supplemental material.\newline
\begin{center}\rule{0.5\linewidth}{0.5pt}\end{center}
\noindent Ryan Peterson \newline Department of Biostatistics \& Informatics, University of Colorado School of Public Health, Aurora, CO \newline email: [email protected] (corresponding author) \newline
\noindent Joseph Cavanaugh \newline Department of Biostatistics, University of Iowa College of Public Health, Iowa City, IA \newline email: [email protected] \newline
\doublespacing
\hypertarget{introduction}{ \section{Introduction}\label{introduction}}
In the ever-growing, ever-changing field of model selection and machine learning, ``black-box'' predictive models (e.g.~neural networks) have become increasingly popular (and increasingly opaque). When one's exclusive desire is predictive accuracy, these difficult-to-interpret models are often worth a certain lack of understanding. However, overly complex predictive contexts are not generally compatible with the traditional aim of science: to explain and to understand the world in which we live. With their growing popularity, black-box models are starting to be applied in situations where explanation \emph{should} be the primary goal. Worse yet, in some circumstances, there is little regard for the consideration that more transparent models could produce similar prediction results. So, as scientists continually increase the number of candidate predictors, those building models are using increasingly complicated functions of candidate predictors in order to optimize for predictive performance above all else. Is there a justifiable way to hold on to the traditional aims of science amid these trends?
In this paper, we will argue that the benefits reaped from choosing a black-box model must be weighed against the interpretative costs of a lack of scientific understanding. However, before we can proffer a method to accomplish this goal, we must first answer a salient question -- why do black-box methods outperform transparent models in prediction? The answer is difficult because these black-box methods are diverse, as are the situational considerations that make a particular method perform better or worse. Broadly speaking, the benefits of black-box methods can be roughly explored by investigating situations where transparent linear models fail. We will focus in particular on the issue of bias caused by model misspecification.
Say we have data made up of a response vector of interest \(\boldsymbol y\) and a matrix of covariates (or predictors) \(X\), some columns of which are related to \(\boldsymbol y\) while others are not. Suppose that the variates comprising \(\boldsymbol y\) are independent (conditional on \(X\)) and can be conceptualized as following a distribution in the exponential family. One can envision many ways of fitting an optimal predictive model to \(\boldsymbol y\), but a popular method (if transparency is a goal) is to fit generalized linear models (GLMs) based on all possible subsets of covariates and select the best model on the basis of an information criterion. This method is somewhat limited to lower-dimensional settings, because the number of candidate models increases combinatorially with the dimension of \(X\). However, in recent times the Least Absolute Shrinkage and Selection Operator (the lasso) has changed the landscape surrounding the problem of identifying a suitable predictive model (Tibshirani, 1996). With the lasso and its many extensions, it is possible to have an extremely high-dimensional covariate space and still end up with a relatively well-fit, interpretable model. In either setting, if the true generating model has informative interactions among covariates and/or meaningful nonlinear covariate-response relationships, black-box methods exist that can outperform even the best traditional or lasso model (given enough data). This is because none of the candidate main effect (i.e.~transparent) models can capture the model's complex interaction/polynomial terms; all of the candidate transparent models are misspecified.
One potential solution, given the power and flexibility of the lasso, would be simply to add ``derived variables'' of \(X\), such as interactions and polynomials, into a new (potentially very large) design matrix. The lasso \emph{can} simultaneously select and estimate important interactions and polynomials even in this ultra-high dimensional setting. However, in this paper, we will show that this method yields a preponderance of both false and missed discoveries, unless proper methods are used to incorporate what we call ``ranked sparsity.''
The paper is organized as follows. First, we intuitively motivate the concept of ranked sparsity, illustrating its necessity when looking for active interactions. Second, we propose the sparsity-ranked lasso, and connect it to some other related concepts and regularization methods that have been proposed in the literature, as well as other state-of-the-art interaction selection methods. Next, we present simulation studies to investigate the performance of the SRL compared to competing methods in the polynomial and interaction selection setting. We then apply the SRL in a high-dimensional setting of gene-environment interaction selection in the context of a lung cancer application. Finally, we discuss the strengths and weaknesses of the SRL relative to other strategies that have been proposed.
\hypertarget{ranked-sparsity}{ \section{Ranked Sparsity}\label{ranked-sparsity}}
\hypertarget{intuitive-motivation}{ \subsection{Intuitive Motivation}\label{intuitive-motivation}}
Ranked sparsity, which we also refer to as ranked skepticism, is a philosophical framework that challenges the traditional implementation of Occam's Razor in the context of variable selection. In Einstein's words\footnote{Debate exists regarding whether this is a true quote or a paraphrase of Einstein.}, the maxim stipulates that ``everything must be made as simple as possible, but not simpler.'' This is a noble goal, but some obvious questions arise: how do we know when a model is as simple as it should be? How should we measure simplicity in the first place? Specifically, we wish to challenge the ubiquitous answers to these questions in the field of model selection, which rely on a presumption that we call ``covariate equipoise'': the prior belief that all covariates are equally likely to enter into a model. To illustrate this idea, say we are trying to find a well-fit model to predict an outcome \(\boldsymbol y\) using a set of covariates, including age, weight, and height. Of the candidate models below, which is ``simpler''? \begin{align} E(y_i) &= \beta_0 + \beta_1 \text{Age}_i + \beta_2 \text{Weight}_i + \beta_3 \text{Age}_i*\text{Weight}_i \label{eq:a}\\ E(y_i) &= \beta_0 + \beta_1 \text{Age}_i + \beta_2 \text{Weight}_i + \beta_3 \text{Height}_i \label{eq:b} \end{align} Virtually all variable selection tools assume these two models to be equally simple. Due to the presumption of covariate equipoise, simplicity is equated to parsimony, and is measured only by the number of parameters in the model (which is 4 in both models). However, any statistician would quickly recognize that model \eqref{eq:b} is an order of magnitude easier to understand and communicate than model \eqref{eq:a}. We argue that a proposed model's simplicity should therefore not only be tied to its level of parsimony, but also to its transparency as measured by the ease at which it can be understood and communicated (a metric loosely tied to the number of interactions and nonlinear terms in the model). Ultimately, a good model interpreted correctly is better than a great model interpreted erroneously. This concept is the primary motivation for the ranked sparsity methods we introduce in this paper, which provide a means of searching for important interactions/nonlinear terms without rendering the chosen model unnecessarily opaque.
In this work, for the linear form used to characterize the mean outcome, we use the term ``main effects'' to refer to the regression coefficients on the original covariates of interest, and ``interaction effects'' to refer to coefficients on the product of covariates that correspond to the main effects. We also define the ``sparsity level'' as the proportion of candidate variables (a.k.a. features, covariates, or predictors) that are inactive in a given true generating model. A high sparsity level thus indicates that a smaller proportion of candidate variables are truly important. Conversely, the ``saturation level'' is defined as the proportion of candidate variables that are active. The sparsity level is governed by a mix of what cannot be known about nature's true generating model and what can (sometimes) be known about the ambition of a particular scientific project. In sparse settings, consistent model selection criteria such as the Bayesian Information Criterion (BIC) (Schwarz, 1978) and its extensions (Bogdan et al., 2008; Chen and Chen, 2008) have been shown to be effective, while in saturated settings, efficient criteria such as Cp, AIC, and corrected AIC (Mallows, 1973; Akaike, 1974; Hurvich and Tsai, 1989) perform relatively well. This difference in the performance of various model selection criteria suggests that in settings where multiple levels of sparsity are to be expected among different groups of covariates, the optimal criterion needs to account for this disparity in some way by penalizing the covariates differently. The ``ranking'' that occurs in our concept of ranked sparsity thus refers to settings where the sparsity levels within covariate groups are expected to be ordered in a specific way \emph{a priori}.
With this impetus in mind, given a saturation level in the main effects, we can show that the maximum saturation level attainable for the (first-order) interaction effects is limited by ``hierarchy'' assumptions about the true generating model. Sometimes called model heredity or the marginality principle, model hierarchy refers to the rules pertaining to which interactions can be nonzero, and it is typically broken down into ``strong,'' e.g. \(E(y_i) = \beta_0 + \beta_1 x_{1i}+ \beta_2 x_{2i} + \beta_3 x_{1i}*x_{2i}\); ``weak,'' e.g. \(E(y_i) = \beta_0 + \beta_1 x_{1i}+ \beta_3 x_{1i}*x_{2i}\); and ``anti-'' (or ``non-'') hierarchical models, e.g. \(E(y_i) = \beta_0 + \beta_3 x_{1i}*x_{2i}\). As an illustration of how hierarchy limits saturation, consider a case where only 3 of 30 possible main effects are active; then strong hierarchy would dictate that in the generating model, only \(\binom{3}{2} = 3\) signal variables can exist in the interaction set. Under weak hierarchy, this quantity is limited to \(\sum_{j=1}^3 (30 - j) = 84\) active interactions. In either case, the number of signals in the interaction set is bounded by the number of signal variables in the main effects, as are their saturation (sparsity) levels (see supplemental materials for proofs). Of course, outside of simulation settings, the hierarchy status of the generating model will be unknown. Even so, there is good reason to believe that the saturation level in one set of covariates (interaction effects) is going to be less than the saturation level in another group of covariates (main effects). As a result, it becomes necessary to account for this disparity somehow in the model selection process; we cannot simply apply the same penalty to both main effects and interactions and expect optimal performance.
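To make this counting concrete, the bounds in the preceding example can be verified by direct enumeration; the short sketch below (in Python) counts admissible pairwise interactions under each hierarchy assumption for \(p=30\) main effects of which 3 are active.
\begin{verbatim}
# Count admissible pairwise interactions under hierarchy assumptions,
# with p = 30 main effects of which the first three are active.
from itertools import combinations

p, active = 30, {0, 1, 2}
pairs = list(combinations(range(p), 2))              # 435 candidates
strong = [ij for ij in pairs if set(ij) <= active]   # both parents active
weak = [ij for ij in pairs if set(ij) & active]      # >= 1 parent active
print(len(pairs), len(strong), len(weak))            # 435, 3, 84
\end{verbatim}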
\hypertarget{the-sparsity-ranked-lasso}{ \subsection{The Sparsity-Ranked Lasso}\label{the-sparsity-ranked-lasso}}
In this section, we propose and motivate the Sparsity-Ranked Lasso (SRL) as a tool for implementing ranked sparsity in the search for important derived variables of a feature space. Suppose we have \(p\) features \(\left[x_1, x_2, ..., x_p\right] = X_{n \text x p}\), and a centered response variable \(\boldsymbol y\); some (but not all) features are related to \(\boldsymbol y\). For this section, we assume that the variates comprising \(\boldsymbol y\) are normally distributed and conditionally independent given \(X\); however, it will become clear that the concept and development apply more generally in the GLM family. The lasso (Tibshirani, 1996) has become immensely popular in this setting for its computational efficiency and its effectiveness in variable selection. The lasso simultaneously estimates coefficients for each of the \(p\) features and selects from them, such that they are either ``active'' (i.e. \(\hat \beta_j \neq 0\)), or ``inactive'' (\(\hat \beta_j = 0\)). The estimated nonzero coefficients do suffer from a bias that is introduced by the lasso's penalty term, but this bias is often warranted as it significantly attenuates the variance associated with having too saturated of a model. Typically, the magnitude of shrinkage induced by the lasso's penalty term is treated as a tuning parameter (\(\lambda\)) and selected on the basis of an information criterion or cross-validation (CV). In this section, we will show how the lasso is expected to fail when applied to feature sets of different sizes, most notably when applied to interactions and main effects, and we will offer a solution via the sparsity-ranked lasso.
The ordinary lasso solution can be obtained by standardizing all of the columns of \(X\) and minimizing the following expression with respect to \(\boldsymbol \beta\): \[
\left|\left|\boldsymbol y - X\boldsymbol \beta\right|\right|^2 + \lambda \sum_{j=1}^p | \beta_j| \] It is well-known that this solution has a Bayesian interpretation. If each \(\beta_j \sim \text{Laplace}(0, \lambda)\), then the mode of the joint posterior distribution represents the lasso solution (Tibshirani, 1996) for a given \(\lambda\) value. As the sample size increases, the likelihood becomes more concentrated and contributes more information to the posterior, eventually pulling the mode off of zero. As \(\lambda\) is increased, the balance of information shifts toward the Laplace prior, and the mode gets pulled (potentially all the way) to zero (a visualization of this is available at \url{https://ph-shiny.iowa.uiowa.edu/rpterson/shiny_vis1/}). These zero-centered independent Laplace priors form the following joint prior density: \[ \pi (\boldsymbol \beta) =
\prod_{j=1}^p \frac {\lambda}{2 } e^{-\lambda |\beta_j|} \]
As a brief aside, we turn to the concept of Fisher information. Fisher information is invoked in likelihood theory to describe the behavior of maximum likelihood estimators, but the concept can quantify the structural characteristics of any joint density.\footnote{Jeffreys, for instance, derived his famous noninformative prior based on the concept of the Fisher information of a prior density.}
For \(W \sim f(w | \lambda)\) where \(\lambda \in \Lambda\) is scalar and \(\lambda \rightarrow \log f(w | \lambda)\) is twice differentiable in \(\lambda\) for every \(w\), the model Fisher information at any \(\lambda\) is defined to be
\(I(\lambda) = E_{W|\lambda} \left[-\frac {\partial^2}{\partial \lambda^2} \log f(W|\lambda)\right]\).
Now, consider partitioning the covariate space \(X\) into \(K\) groups, such that \(X = \left[A_1, A_2, ..., A_k, ..., A_K\right]\). If we let \(p_k\) refer to the column dimension of \(A_k\) \(\forall \ k\), and let \(\beta_j^k\) refer to a particular \(\beta_j\) in covariate group \(k\), then the prior for \(\boldsymbol \beta\) undergoes a purely cosmetic change and becomes \[
\pi (\boldsymbol \beta | \lambda) \propto
\prod_{k=1}^K \prod_{j = 1}^{p_k} \lambda e^{-\lambda |\beta_j^k|} \]
\noindent If we think of all of the \(\beta_j^k\) as random variables (which they are \emph{a priori}) and take \(\lambda\) to be a parameter, it becomes straightforward to find the Fisher information in this prior density. \[ \begin{aligned}
\frac {\partial^2}{\partial \lambda^2} \log \pi (\boldsymbol \beta | \lambda) = - \frac{1}{\lambda^2} \sum_{k=1}^K p_k \\
I(\lambda) = E_{X|\lambda} \left[
-\frac {\partial^2}{\partial \lambda^2} \log \pi (\boldsymbol \beta | \lambda) \right]
= \frac{1}{\lambda^2} \sum_{k=1}^K p_k \end{aligned} \]
\noindent This information increases with the dimension of each group's parameter space equally (for any \(\lambda > 0\)). So if \(p_1 = 10\) and \(p_2 = 100\), by default the contribution toward the prior information by the covariate group \(A_1\) is only one tenth that of the covariate group \(A_2\), for any \(\lambda > 0\). If \(A_1\) refers to the main effects, and \(A_2\) refers to their pairwise interactions, there can be a substantial degree of \emph{a priori} informational asymmetry between the interactions and the main effects. This asymmetry leads popular feature selection tools such as the ``all-pairwise lasso'' (APL) to make too many selections among the candidate interaction effects while shrinking the main effects excessively.
In many (perhaps most) situations, the preceding weighting scheme may not be desired. We can slightly modify the prior distribution by replacing \(\lambda\) with \(\lambda_k = \lambda \sqrt{p_k}\). Now, unlike before when the distributions were independent and identical for all \(k\), each \(\beta_j^k\) is only independent and identically distributed within its own covariate group \(k\). The Fisher information contained in the prior for covariates in group \(k\) after this modification is \[ I(\lambda_k) = \frac{p_k}{\lambda^2_k} = \frac{p_k}{\lambda^2 p_k} = \frac{1}{\lambda^2} \ \forall \ k \]
\noindent In words, by scaling each group's penalty by the square root of its dimension, we have ensured that the prior information is the same across groups; no group has an \emph{a priori} informational advantage. Therefore, we can achieve a ``ranking'' in the sparsity that treats covariate \emph{groups} equally as opposed to the covariates themselves. If we add another tuning parameter, \(\gamma\), in the definition for \(\lambda_k\) such that \(\lambda_k = \lambda p_k^\gamma\), the resulting approach can be seen as a generalization of the ordinary lasso, where \[ I(\lambda_k) = \frac {1}{\lambda^2} p_k^{(1-2\gamma)} \]
\noindent If \(\gamma = 0\), this is identical to the ordinary lasso. If \(\gamma = 0.5\), then \(w_k =\sqrt {p_k}\) and each covariate group contributes the same amount of prior information (which is a good default setting for many circumstances, especially for the context of interactions). As \(\gamma\) increases, the penalties for larger groups of covariates increase quickly (as the information contribution decreases quickly with group size). In less-common cases where the grouping is not well-defined, we suggest tuning \(\gamma\) to a value between zero (the ordinary lasso) and 0.5 (equal group-level prior information). Under this primary SRL formulation, we do not suggest choosing \(\gamma > 0.5\) unless there is a strong reason to believe that the quality of information decreases substantially with group size.
We consider a variant of the penalty weighting scheme where the information is expected to decrease as the \emph{group index} increases. Specifically, instead of scaling the penalties by \(p_k^{\gamma}\), we use \(({\sum_{i=1}^k p_i})^{\gamma}\). This group index formulation yields a group-level information contribution of \[ I(\lambda_k) = \frac {1}{\lambda^2}\frac{p_k}{\left(\sum_{i=1}^k p_i\right)^{2\gamma}} \]
\noindent In words, the information contribution is highest for \(A_1\), and decreases cumulatively as more groups are added. If \(A_1\) represents main effects, and \(A_2\) represents squared polynomial terms of those main effects, this cumulative group index penalty ensures that the polynomial terms are penalized more heavily than the main effects (despite having the same group size). We call this formulation the cumulative SRL, for which the value of \(\gamma\) determines the extent to which the penalty increases with the group index (e.g.~polynomial order). We suggest using cross-validation to tune \(\gamma\) for the cumulative SRL.
With these changes, the objective function resembles closely that for the adaptive lasso (Zou, 2006), minimizing the following with respect to \(\boldsymbol \beta\): \[
\left|\left|\boldsymbol y - X\boldsymbol \beta\right|\right|^2 + \lambda \sum_{k=1}^{K} \sum_{j=1}^{p_k} w_k| \beta_j^k| \]
\noindent Unlike the adaptive lasso, \(w_k\) only depends on the group dimensions and potentially \(\gamma\) and the group index. Specifically, \(w_k = p_k^{\gamma}\) for the original SRL (\(w_k = \sqrt{p_k}\) when \(\gamma = 0.5\), the setting that assumes group parity in prior information), and \(w_k = \left(\sum_{i=1}^k p_i\right)^{\gamma}\) for the cumulative SRL. As has been shown in other work (e.g. Wang and Wang, 2014), this objective function can be slightly modified to handle non-normal outcomes (binary, Poisson, survival, etc.) by substituting the negative log-likelihood for the least squares term, minimizing
\(-l(\boldsymbol \beta) + \lambda \sum_{k=1}^{K} \sum_{j=1}^{p_k} w_k| \beta_j^k|\) with respect to \(\boldsymbol \beta\); such a substitution was employed for this paper's application. We suggest using path-wise coordinate descent to optimize this objective function with respect to \(\boldsymbol \beta\); this can be implemented with either the \texttt{sparseR} package or \texttt{ncvreg} (Breheny and Huang, 2011).
To summarize, while the ordinary lasso presumes throughout its path that the sparsity levels are equal among covariate groups, the SRL can enforce a ranking in the expected sparsity levels such that the amount of contributed prior information is controlled across covariate groups. We also introduced a ``cumulative'' variant of the SRL, which is particularly useful for selecting and estimating polynomial effects. We illustrated how either the primary SRL or its cumulative variant can be tuned with \(\gamma\). A sensible choice for the SRL is to utilize \(\gamma=0.5\), which sets the prior information to be equal across all covariate groups; we suggest fixing \(\gamma=0.5\) when utilizing the SRL specifically for interactions or for otherwise well-defined covariate groups (or both, as is the case in this paper's application). The cumulative SRL performs best when \(\gamma\) is tuned among several possible values (e.g. \(\gamma \in \{0, .5, 1, 2, 4, 8, ...\}\), suggesting an increased amount of penalization for higher orders of polynomials), which can be done using an information criterion or CV. We explore the performance of the SRL in the forthcoming simulation studies and application, after a brief discussion of similar ideas and techniques in the literature.
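To illustrate the mechanics outside of the \texttt{sparseR} and \texttt{ncvreg} implementations referenced above, note that a weighted lasso can be fit with any standard solver via a column-rescaling identity: penalizing \(\beta_j\) by \(\lambda w_k\) is equivalent to dividing the corresponding standardized column by \(w_k\), fitting an ordinary lasso, and rescaling the estimated coefficients back. The Python sketch below applies this trick with \(\gamma = 0.5\) to main effects and pairwise interactions on simulated data; it is a minimal illustration under these assumptions, not the authors' implementation.
\begin{verbatim}
# Sketch: SRL for main effects + pairwise interactions via the
# weighted-lasso column-rescaling trick (gamma = 0.5).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + rng.normal(size=n)

# Columns: p main effects followed by choose(p, 2) interactions
Z = PolynomialFeatures(degree=2, interaction_only=True,
                       include_bias=False).fit_transform(X)
Z = StandardScaler().fit_transform(Z)

# w_k = p_k^gamma with gamma = 0.5, i.e. sqrt of each group's size
w = np.concatenate([np.full(p, np.sqrt(p)),
                    np.full(p * (p - 1) // 2, np.sqrt(p * (p - 1) / 2))])
fit = LassoCV(cv=10).fit(Z / w, y - y.mean())
beta = fit.coef_ / w           # coefficients on the original scale
print(np.flatnonzero(beta))    # indices of selected features
\end{verbatim}
In this parameterization, relaxing or strengthening the interaction penalty only requires changing the exponent \(\gamma\) used to form the weights.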
\hypertarget{related-methods}{ \subsection{Related Methods}\label{related-methods}}
\hypertarget{existing-methods-which-penalize-based-on-group}{ \subsubsection{Existing Methods which Penalize Based on Group}\label{existing-methods-which-penalize-based-on-group}}
Several methods exist that are close in spirit to a general ranked sparsity framework, including the Integrative Lasso with Penalty Factors (IPF-lasso) (Boulesteix et al., 2017) and the priority lasso (Klau et al., 2018). In both methods, each group of covariates has its own estimated penalty. The IPF-lasso creates a new tuning parameter for each covariate group \(k\), estimating \(\lambda_k\) using a grid search and cross-validation. While technically possible for any number of groups, the IPF-lasso can become computationally difficult when multiple groups are considered (though notably, an adaptive extension of the IPF-lasso which mitigates this issue has been proposed and implemented in the \texttt{ipflasso} R package). Similarly, the priority lasso incorporates a priority ordering of feature groups and fits sequential lasso models on these groups, using the residuals from each model as a new outcome to be predicted with the next most important feature set. The priority lasso is feasible for multiple groups; however, in order to avoid over-optimism, Klau et al. (2018) recommend a cross-validated offset schema which can be computationally intensive and difficult to implement for multiple groups.
Another related lasso extension meriting discussion is the sparse group lasso (SGL). The solution to the SGL is found by minimizing the following with respect to \(\boldsymbol \beta\): \[
\left|\left|\boldsymbol y - X\boldsymbol \beta\right|\right|^2 +
\alpha \lambda \sum_{k=1}^K \sum_{j=1}^{p_k} |\beta_j^k| +
(1-\alpha) \lambda \sum_{k=1}^K \sqrt{p_k} ||\boldsymbol \beta^k || \]
\noindent SGL bears a resemblance to the SRL, noticing in particular the factor multiple of the group-level penalty, \(\sqrt{p_k}\). However, despite the similar penalty scaling, the SGL will yield quite different results to the SRL in practice. While SGL shrinks the magnitude of the entire vector \(\boldsymbol \beta^k\) within each group, SRL penalizes each coefficient in some sense independently from the others in its group. For example, if the magnitude of the first coefficient in group one is large, i.e. \(\hat \beta_1^1 >> 0\), the SRL would not induce any effect on the magnitude of \(\hat \beta_2^1\). The SGL still penalizes each variable separately in its first penalty, but its second penalty is on the group-level magnitude. In our hypothetical example, a large \(\hat \beta_1^1\) coefficient would thus relax the penalty on \(\hat \beta_2^1\) to an extent. See Figure 1 in Friedman, Hastie and Tibshirani (2010) for a good visualization of this principle. A more detailed comparison of the performance of the SGL to the SRL is left for future work, but one benefit of the SRL that is already evident is its lack of the need for an additional tuning parameter, as \(\gamma = 0.5\) assumes equal prior information across groups, and is thus a defensible choice for many circumstances.
\hypertarget{existing-methods-for-interaction-selection}{ \subsubsection{Existing Methods for Interaction Selection}\label{existing-methods-for-interaction-selection}}
Several methods have been proposed for selecting and estimating interactions under the weak and/or strong hierarchy ``constraint.'' The hierNet approach (Bien, Taylor and Tibshirani, 2013) is well-suited for low-dimensional problems due to its computational complexity. A similar regularization-based method, glinternet (Lim and Hastie, 2015), has been shown to be as effective as hierNet in selecting interactions, but able to execute the fitting and selection 10-10000 times faster. The ``strong heredity interaction model'' (SHIM) approach is similar to the hierNet approach; it extends the lasso to select interaction terms while under a strong hierarchy constraint. SHIM also adds an adaptive lasso element to achieve the oracle property (Choi, Li and Zhu, 2010), and uses an IPF-lasso-type approach of tuning the penalty for the interactions separately from the main effects. SHIM thus has an additional tuning parameter to cross-validate over. Yet another approach is called ``regularization under marginality principle'' (RAMP) (Hao, Feng and Zhang, 2018), which is a two-stage regularization approach that is useful for settings where the storage of the interaction model matrix is an issue. By having a first-stage screening via regularization on the main effects, RAMP substantially cuts down on the size of the model matrix in its second stage, only considering candidate interactions that made it past the first selection stage. All of these methods constrain the solution path to weakly or strongly hierarchical models. Another set of 12 alternative methods for determining treatment-biomarker interaction screening via various types of regularization and dimension reduction are described and empirically evaluated in Ternès et al. (2017).
It is important to note that many of these methodological works on interaction selection involve a comparison to the APL, which consistently selects too many interactions in these comparisons. This issue is compounded when the true generating model has very few ``active'' interactions, which remains an ongoing limitation of interaction feature selection for some of these methods (Lim and Hastie, 2015). The SRL, we will show, does not suffer from this limitation. Further, one consideration that is often mentioned only briefly in these related works is whether or not we \emph{should} restrict all candidate models to be hierarchical. The usual presumption is that it makes the most sense for all candidate models to be hierarchical. However, Chipman (1996) provides a compelling paradigm for model hierarchy in a Bayesian context, and argues why the strict imposition of hierarchical structures may not always be defensible. If we think of interactions as children of their ``parent'' main effects, we would guess that a child is certainly \emph{most likely} to be in a model if its parents are both in the model. It is comparably less probable that a child is in a model if one of its parents is not. But is it absolutely impossible (with probability zero) for a child to be in a model without either of its parents?
There are numerous occasions where a generating model is not, in fact, hierarchical. Chipman gives the example of the atmospheric sciences, where relations of the form \(Y = A\exp(BC)\) are common; such a model is non-hierarchical on the log scale. We can also point to models for lung cancer, where ``pack-years,'' the interaction between the number of years spent smoking and the reported number of cigarette packs smoked per day, is an acknowledged risk factor on its own. In fact, a non-hierarchical model of this type is plausible in any setting where level of exposure and time of exposure are both captured somewhere in the candidate covariate space. Therefore, we argue that in lieu of hierarchy constraints, a better general rule would be to enforce hierarchy \emph{preference}. This is considerably different from a constraint. We have shown that the SRL enforces higher penalties for interactions than for the main effects (when \(p > 2\)), which naturally enables hierarchy preference (but does not force hierarchy).
Finally, it is feasible that the IPF-lasso could be used to select from all possible interactions by defining main effects and interaction effects as two separate blocks, an approach which would enforce neither a hierarchy preference nor a constraint. Such an approach, while defensible, may suffer from imprecision (in ensuring that interactions and main effects contribute proportionally to the prior information). Boulesteix et al. (2017) advise investigating penalty factors within each group as \((1, 2^\gamma)\) for a sequence of positive and negative integers \(\gamma\) (in the two-group case). For interactions, if candidate \(2^\gamma\) values are not near \(\sqrt{\binom{p}{2}}\), we would expect asymmetry in the prior information. More generally, similar imprecision is likely to occur when covariate groups vary substantially in size. For example, in one application, Boulesteix et al. (2017) investigate combining clinical data (11 features) with microarray gene expression measurements (22,283 features) to predict the survival of patients with breast cancer. Their CV procedure estimated the optimal penalty factor for the genetic expression data to be \(2^5=32\), thereby penalizing the expressions much more than the clinical data (in fact, no gene expressions were selected). With the SRL, prior to CV, we know that for the gene-expression measurements to contribute the same amount of prior information as the clinical features, this penalty factor should be \(\sqrt{22{,}283 / 11} \approx 45\). Therefore, despite being optimized via CV, 32 is likely too low for the genetic features; a penalty factor of 45 would still select no expression features, so in that sense the model would be the same. However, with a penalty factor of only 32, the coefficients on the clinical features may be shrunk more than necessary, thereby decreasing the power to detect clinical effects as well as (perhaps) the predictive accuracy of the final model.
\hypertarget{the-sparser-package}{ \subsection{\texorpdfstring{The \texttt{sparseR} Package}{The sparseR Package}}\label{the-sparser-package}}
We have developed an R package, \texttt{sparseR}, which works in concert with \texttt{ncvreg} (Breheny and Huang, 2011) to implement and facilitate the ranked sparsity methods discussed in this paper. By building upon the \texttt{recipes} package (Kuhn and Wickham, 2019), \texttt{sparseR} also provides a useful means of preprocessing data sets before model fitting, which can facilitate the use of a mix of factors, binary variables, and continuous variables as covariates. The package also contains an information-criterion based metric that we call RBIC (Peterson, 2019), which is paired with a forward step-wise selection function that can select from all possible interactions and polynomials under strong, weak, or non-hierarchy using the ranked-sparsity framework. The \texttt{sparseR} package and a detailed tutorial will soon be made available on the Comprehensive R Archive Network (CRAN). A development version is available on GitHub at \url{https://github.com/petersonR/sparseR}.
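As a brief illustration of the intended workflow, a call might resemble the sketch below. This is a sketch only: \texttt{df} is a hypothetical data frame with outcome \texttt{y}, and the argument names \texttt{k} (interaction order) and \texttt{poly} (polynomial degree) reflect the package's interface as of this writing and may change.

\begin{verbatim}
# remotes::install_github("petersonR/sparseR")  # development version
library(sparseR)

# Hypothetical data frame `df` with outcome `y`; `k` and `poly` are
# assumed argument names for interaction order and polynomial degree
fit <- sparseR(y ~ ., data = df, k = 1, poly = 2)
\end{verbatim}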
\hypertarget{simulations}{ \section{Simulations}\label{simulations}}
\hypertarget{polynomial-simulation-study}{ \subsection{Polynomial Simulation Study}\label{polynomial-simulation-study}}
Consider a simple simulated example, where we have 100 observations arising from a true \(f(x) = 10(x-0.5)^2\) measured with some residual noise, \(\varepsilon_i \overset {iid} \sim N(0, 0.9^2)\), and \(x_i \overset {iid} \sim \text {unif}(0,1)\). This relationship is shown by the solid black line in Figure \ref{fig:Fig_01}\footnote{The R language and environment version 4.0.2 is used for all figures in this work.}. It is well-known that the addition of extraneous polynomial terms in a regression model hurts the model's predictive performance, especially at the bounds of the covariate space. This can be seen in the top three plots of Figure \ref{fig:Fig_01} -- adding higher order terms increases the ``wiggliness'' of the fits (represented by the grey lines).
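For reference, the data-generating mechanism for a single sample can be reproduced with a few lines of R (the seed is arbitrary):

\begin{verbatim}
set.seed(1)                        # arbitrary seed
n <- 100
x <- runif(n)                      # x_i ~ unif(0, 1)
f <- function(x) 10 * (x - 0.5)^2  # true mean function
y <- f(x) + rnorm(n, sd = 0.9)     # epsilon_i ~ N(0, 0.9^2)
\end{verbatim}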
With only one covariate, we could easily fit, say, \(m\) models with increasing orders of polynomials up to \(m\) and select the best order using an information criterion. However, in higher dimensional settings with \(p\) covariates, this approach is not practical since the number of possible models is \(2^{pm}\). In such settings, one might think to use the lasso to select the optimal order -- this method is explored in the middle three plots of Figure \ref{fig:Fig_01}. Evidently, the bias incurred by the L1 penalty reduces some of the variability (the ``wiggliness'') in the relationship, while at the same time contaminating the shape of the relationship (note that the fitted lines are bent down towards the origin).
The cumulative SRL can be successfully applied for such polynomial models; if we set \(w_j = (d_j)^\gamma\), where \(d_j\) refers to the degree of covariate \(j\), this approach is equivalent to the cumulative group-index SRL in the one-covariate case. The resulting fits, where \(\gamma \in \{0.5, 1, 2\}\) is selected by BIC\footnote{We exclude $\gamma=0$ in this application of the cumulative SRL to showcase how the method differs from the ordinary lasso applied to the polynomials, which is equivalent to setting $\gamma=0$.}, are shown by the grey lines in the bottom three plots of Figure \ref{fig:Fig_01}. We observe that the fits both reduce the wiggliness from the extraneous terms and induce less ``bending'' towards the origin.
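A minimal sketch of this weighting, continuing the simulated data above, uses the \texttt{penalty.factor} argument of \texttt{ncvreg}; here we assume that \texttt{BIC()} applies across the \(\lambda\) path via the fitted object's log-likelihood method.

\begin{verbatim}
library(ncvreg)

m <- 6                                 # maximum polynomial order
X <- poly(x, degree = m, raw = TRUE)   # columns x, x^2, ..., x^m
gamma <- 1                             # one candidate from {0.5, 1, 2}
w <- (1:m)^gamma                       # w_j = d_j^gamma

fit  <- ncvreg(X, y, penalty = "lasso", penalty.factor = w)
best <- which.min(BIC(fit))            # BIC over the lambda path
coef(fit, which = best)
\end{verbatim}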
The plots in Figure \ref{fig:Fig_01} only show 50 fits each, but repeating this process 10,000 times, we can compare the root-mean-squared error (RMSE) of estimation resulting from each method across the domain of \(x\)\footnote{The RMSE of estimation is based on the sum of the squared deviations between the true mean values and the corresponding estimates under the fitted model and is computed for 50 evenly-spaced points along the domain of \textit{x}.}. In Figure \ref{fig:Fig_02}, we show the increase in the RMSE for each method relative to a baseline ``oracle'' model (i.e.~an ordinary least-squares model that only includes the ``true'' \(x\) and \(x^2\)). We find that while there is no replacement for an ``oracle'' model, the next best models are those which utilize the SRL method. Interestingly, there appears to be very little predictive difference between the SRL applied up to the 4th order and the SRL applied up to the 6th order; this implies that we could likely increase the maximum order and still not observe a substantive impact on predictive performance. On the other hand, if the lasso is used, there is a marked decrease in predictive performance between the 4th order model and the 6th order model; it appears as though these models (as well as the OLS models) perform increasingly poorly as the number of extraneous polynomials increases.
\begin{figure}
\caption{A simple simulation where 50 samples of size 100 are generated for $x$ and a response variable $y$ with the relationship $y = f(x) + N(0, .9^2)$. The black line represents the true $f$, and the grey lines represent 50 fits to the different samples. Models in the top three plots are fit using ordinary least squares (OLS); in the middle three plots, models are fit with the lasso; and on the bottom three plots, models are fit using the sparsity-ranked lasso (SRL). SRL and lasso models are tuned using BIC. The covariates included are the polynomials of $x$ up to $x^2$ (left) up to $x^4$ (center), and up to $x^6$ (right).}
\label{fig:Fig_01}
\end{figure}
\begin{figure}
\caption{The expected increase in the root-mean-squared error (RMSE) for the ordinary least squares (OLS), lasso, and sparsity-ranked lasso (SRL) models relative to a baseline ``oracle'' OLS model that only includes the ``true'' variables ($x$ and $x^2$).}
\label{fig:Fig_02}
\end{figure}
We conducted a separate simulation study to further establish the performance of the cumulative SRL for more general functional forms. Between kernel and spline methods, many options exist for estimating a smooth relationship in low-dimensional settings. We investigate four possible generating models: polynomials of orders 10, 2 (quadratic), 1 (linear), and 0 (null). In each setting, we generate data (\(n=100\)) according to the following model: \[ \begin{aligned} &y_i = \alpha + \beta_1 x_i + \beta_2 x_i^2 + ... + \beta_{10}x_i^{10} + \varepsilon_i \\ &\text{where } \varepsilon_i \overset{iid}{\sim} N(0,1) \text{ and } x_i \overset{iid}{\sim} \text{unif}(0,1) \end{aligned} \]
In order to compare many possible functional forms (for the high-order and quadratic settings), the \(\beta_j\) parameters were randomly generated. For the high-order setting, we drew \(\theta_1, \theta_2, ..., \theta_{10} \sim N(0,1)\), then scaled them so that their magnitudes sum to 10: \(\beta_{j} = 10\,\theta_j / \sum_i |\theta_i|\). The same technique was used in the quadratic generating model, except only for \(j \in \{1,2\}\); all other parameters were set to 0. In the linear case, \(\beta_1=10\), and in the null setting, \(\beta_j = 0 \ \forall j\). The sole covariate \(x\) follows a standard uniform distribution within each simulation.
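In R, this coefficient-generation scheme amounts to the following sketch (coefficients are regenerated in each simulation run):

\begin{verbatim}
# High-order setting: 10 active coefficients with |beta| summing to 10
theta <- rnorm(10)
beta  <- 10 * theta / sum(abs(theta))

# Quadratic setting: same scaling, but only j in {1, 2} are active
theta2 <- rnorm(2)
beta2  <- c(10 * theta2 / sum(abs(theta2)), rep(0, 8))
\end{verbatim}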
For model fitting, we utilize the cumulative SRL method with all terms up to the 10th order. We compare this model fit with the LOESS smoother (\texttt{loess()} in the \texttt{stats} package (R Core Team, 2020)) and with a smoothing spline (the \texttt{gam()} and \texttt{s()} functions from the \texttt{mgcv} package (Wood, 2011)). The default settings are used for these functions. The cumulative SRL is tuned using repeated (\(r=5\)) 10-fold cross-validation with \(\gamma \in \{0, 0.5, 1\}\)\footnote{The $\gamma$ options of 0, 0.5, and 1 represent a minimal set of possible ranked sparsity settings for additional penalization for higher-order polynomials from none ($\gamma=0$) to strong ($\gamma=1$).}. The simulations are repeated 1,500 times. Models are evaluated on the basis of the RMSE of estimation on \(n=10,000\) new randomly sampled observations, presented in Figure \ref{fig:Fig_03}. We find that the cumulative SRL achieves similar performance to its LOESS and spline alternatives. Importantly, in contrast to the alternatives, the cumulative SRL performs very well for the lower order and null models, predicting new values almost as well as the oracle model. This relative improvement is the result of overfitting by the alternative methods. The SRL's superlative performance in the null setting matters to a great extent in the high-dimensional setting, where we expect many null relationships. For the \(10^{th}\)-order generating model, the poor performance of the ``full'' model, despite it being technically correctly specified, reflects the high correlation among the polynomials of \(x\), which inflates the variance of the estimated regression coefficients.
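For the alternatives, the default fits can be obtained as below (a sketch, reusing \texttt{x} and \texttt{y} from the earlier generation code; the cumulative SRL fit follows the \texttt{ncvreg}-based pattern shown earlier, with terms up to the 10th order and \(\gamma\) tuned by repeated cross-validation):

\begin{verbatim}
library(mgcv)

fit_loess  <- loess(y ~ x)   # default LOESS smoother (stats package)
fit_spline <- gam(y ~ s(x))  # default smoothing spline (mgcv package)
\end{verbatim}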
\begin{figure}
\caption{Performance of smoothing methods in describing a truly polynomial (or null) relationship between a single covariate and response. The RMSE within each simulation is plotted along the y-axis. SRL and lasso models are tuned using BIC.}
\label{fig:Fig_03}
\end{figure}
These simulation studies suggest that, at least for functional forms well-represented by polynomials, the cumulative SRL will do well to fit the relationship while also sifting through many null relationships. For other functional forms not well-represented by polynomials, or when the covariate has a high amount of skew, the cumulative SRL will not perform as well as spline/kernel alternatives. However, for the case of covariate skew, a normalizing transformation on the covariate prior to expanding the polynomial can mitigate this effect. We have developed software in a separate work that can adequately and robustly perform these normalizations (Peterson and Cavanaugh, 2020; Peterson, 2021).
\hypertarget{interactions-simulation-study}{ \subsection{Interactions Simulation Study}\label{interactions-simulation-study}}
\hypertarget{simulation-setup}{ \subsubsection{Simulation Setup}\label{simulation-setup}}
While we have shown how the SRL can compete with other smoothing techniques in the one-dimensional setting, the main benefits of the SRL methodology are present in the medium-to-high dimensional setting where model selection must take place. We set up a more extensive simulation in the context of interactions, comparing the SRL's performance to that of the glinternet method, the all-pairwise lasso (APL), and the lasso with only the main effects included (LS0).
Let \(\mathbb X = \left[X, X^{\odot 2}\right]\) refer to the column-wise combination of the main covariates (an \(n \times p\) matrix) with all of their pairwise element-wise products (an \(n \times \binom{p}{2}\) matrix). We wish to fit the linear model \(\boldsymbol y = \mathbb X \boldsymbol \beta + \boldsymbol \varepsilon\), where we partition \(\boldsymbol {\beta}^T = \left[\boldsymbol {\beta}_1^T, \boldsymbol {\beta}_2^T\right]\) to correspond with our notation for \(\mathbb X\). In the usual case, where interactions are not considered, it is assumed that \(\boldsymbol {\beta}_1\) is the only parameter vector with nonzero components. One would expect this assumption to help in situations where the true generating model is, in fact, linear in the main covariates with no active interactions. However, what if there are nonzero components in the other parameter vector? We will investigate.
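Constructing \(\mathbb X\) is straightforward in R; the following sketch forms all \(\binom{p}{2}\) pairwise products:

\begin{verbatim}
# Column-combine X with all pairwise element-wise products
make_XX <- function(X) {
  pairs <- combn(ncol(X), 2)
  X2 <- apply(pairs, 2, function(jk) X[, jk[1]] * X[, jk[2]])
  cbind(X, X2)
}
\end{verbatim}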
In the simulations to follow, we take \(n=300\) and \(p=20\), and we generate each element of \(X\) as an independent uniform(0,1) random variable. The reason for independence in these predictors is to allow simple interpretations of the selection results (with correlated predictors, what constitutes a false discovery or a false negative is less well-defined). We investigate predictive performance in the setting of correlated features in the supplemental work, and mention the take-aways in this paper's discussion. We set the number of nonzero main effects (\(s\)) in \(\boldsymbol \beta_1\) to \(s = 5\). In order to generate our \(\boldsymbol \beta\) coefficients in such a way that a large set of possible relationships are considered, we use scaled normal random variables as our ``active'' (i.e.~nonzero) parameters in \(\boldsymbol \beta\). Specifically, we consider 11 generative settings of interest corresponding to the number of active interactions \(b\), and the algorithm to generate the nonzero (active) coefficients comprises the following steps (an R sketch implementing them appears after the hierarchy rules below):
\singlespacing
For \(b \in \{0,1,2,...,10\}\):
\begin{itemize} \item
Draw \(\theta_1, \theta_2, ..., \theta_s \sim N(0,1)\) \item
Compute scaled main effects
\(\beta_{1j} = 10\,\theta_j / \sum_i |\theta_i|\) \item
Generate \(b\) standard normal variables \(\{\phi_1, ..., \phi_b\}\) \item
Select the index of active interactions \(j\) according to the rules
outlined in the following paragraph \item
Set \(\beta_{2j} = 10\sqrt {\frac{12}{7}}\,\phi_j/\sum_i |\phi_i|\)
\footnote{Scaling by $\sqrt {\frac{12}{7}}$ accounts for the difference in variability between uniform random features and their interactions.} \end{itemize}
\doublespacing
The generating models were not necessarily strongly hierarchical. In particular, each simulation was configured according to Chipman's paradigm: strong hierarchy was most probable, weak hierarchy less so, and non-hierarchy least so. Given \(s = 5\) and \(p=20\), there are \(\binom{s}{2}=10\) candidate interactions that would yield a strongly hierarchical model, \(s (p - s)=75\) that would yield a weakly hierarchical model, and \(\binom{p-s}{2} = 105\) that would yield a non-hierarchical model. Within a simulation, each active interaction effect (if there were any) was drawn at random from these bins of strong, weak, or non-hierarchical candidate effects with probabilities 0.7, 0.2, and 0.1, respectively.
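The following R sketch implements the coefficient-generation steps above, including the hierarchy-bin sampling, for a single simulation with \(b \geq 1\) (for brevity, duplicate interaction draws are not guarded against here):

\begin{verbatim}
s <- 5; p <- 20; b <- 3                 # b = number of active interactions

theta <- rnorm(s)                       # scaled main effects
beta1 <- c(10 * theta / sum(abs(theta)), rep(0, p - s))

pairs    <- combn(p, 2)                 # all candidate interactions
n_active <- colSums(pairs <= s)         # 2 = strong, 1 = weak, 0 = non
bins     <- split(seq_len(ncol(pairs)), n_active)

draw_one <- function() {                # draw a bin, then a pair within it
  bin <- sample(c("2", "1", "0"), 1, prob = c(0.7, 0.2, 0.1))
  sample(bins[[bin]], 1)
}
idx <- replicate(b, draw_one())

phi   <- rnorm(b)                       # scaled interaction effects
beta2 <- numeric(ncol(pairs))
beta2[idx] <- 10 * sqrt(12 / 7) * phi / sum(abs(phi))
\end{verbatim}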
In order to fit these models, we consider four modeling frameworks: LS0 (lasso on original terms only), APL (lasso with original and all pairwise interaction terms), SRL (sparsity-ranked lasso with original and all pairwise interaction terms), and GLN (glinternet model). For each framework, the optimal \(\lambda\) is selected with 10-fold CV, and then that tuned model is used to predict 10,000 new randomly sampled observations. The \(\gamma\) parameter for SRL is fixed to 0.5, corresponding to an equal contribution of prior information from the main effects and the interaction effects. This process (including the new generation of \(\beta_{ij}\) terms) is repeated 1,000 times in order to check the models' predictive and selection performance. For the former, we use the RMSE of prediction on newly generated data to compare the predictive accuracy of the final models\footnote{The RMSE of prediction is based on the sum of the squared deviations between each new observation and the corresponding predicted value under the fitted model.}. For the latter, we use the false discovery rate (FDR), the mean number of Type I errors, and the mean number of Type II errors, examining these quantities both collectively and separately for interactions and main effects. In particular, these selection metrics for interaction terms loosely measure the ``transparency'' and ``interpretability'' of the various models; models with many false discoveries in the interaction effects are needlessly opaque.
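A sketch of the SRL fit and tuning, continuing the generation sketches above and assuming (consistent with the IPF-lasso discussion earlier) that \(\gamma = 0.5\) corresponds to penalty factors proportional to the square root of each group's size:

\begin{verbatim}
library(ncvreg)

n <- 300; p <- 20
X  <- matrix(runif(n * p), n, p)        # independent unif(0,1) features
XX <- make_XX(X)                        # design from the earlier sketch
y  <- as.numeric(XX %*% c(beta1, beta2)) + rnorm(n)

w <- c(rep(sqrt(p), p),                              # main effects
       rep(sqrt(choose(p, 2)), choose(p, 2)))        # interactions

cvfit <- cv.ncvreg(XX, y, penalty = "lasso",
                   penalty.factor = w, nfolds = 10)
coef(cvfit)                             # coefficients at the CV-best lambda
\end{verbatim}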
\begin{figure}
\caption{Predictive performance of various interaction fitting methods relative to SRL. LS0 refers to the lasso fit using only the original terms, APL refers to the lasso fit using the original terms and all pairwise interactions, SRL refers to the sparsity-ranked lasso fit with $\gamma = 0.5$, and GLN refers to the glinternet model. For all models, $\lambda$ was tuned with 10-fold cross-validation. The ``\textasciicircum'' notation refers to the values of the LS0 model that were too large to be clearly plotted next to the other curves.}
\label{fig:Fig_04}
\end{figure}
\hypertarget{simulation-results}{ \subsubsection{Simulation Results}\label{simulation-results}}
The predictive performance of the models across all simulations is shown in Figure \ref{fig:Fig_04}. Although the LS0 model demonstrates a very slight gain in predictive performance if the true model has no interactions, it also exhibits a marked loss in performance when any active interactions are present. The APL model performs comparatively better than the LS0 model when any interactions are present, but performs much worse in the no-interaction case. SRL performs much better than either the APL or GLN in the no-interaction case. SRL and GLN have similar predictive performance to each other when active interactions are present, both performing much better than either the APL or LS0. In the supplement, we show how other correlation structures exhibit similar results; in fact, SRL's relative performance compared to GLN and APL is sometimes even better with higher correlation among features. In situations where the SRL performed worse than other methods (such as the compound symmetry correlation matrix with feature correlation \(\rho = 0.5\)), selecting the optimal \(\gamma \in \{0, 0.5, \infty\}\) using CV still performed comparably to the best alternative.
The plots in Figure \ref{fig:Fig_05} show model selection information for each framework. When the true model has no active interactions, the LS0 and the SRL methods look very similar in terms of FDR and the mean number of Type I/II errors. In this same setting, the GLN and APL models have a much higher FDR and mean number of Type I errors; this is driven by the tendency of these models to select too many false interactions (rendering selected models unnecessarily opaque). For all of the generative settings, the overall FDR for the APL is very high, and it is driven disproportionately by a high FDR in the interaction effects. The GLN method also exhibits this disparity in FDR between main and interaction effects, though to a lesser degree. This difference is further seen in the number of Type I errors; the higher FDR in the interaction effects translates to many more Type I errors for GLN and APL than for SRL. The SRL method maintains approximately the same number of Type I errors in the interaction effects and the main effects for \(b\geq4\). This improvement in FDR/Type I error rate exhibited by SRL compared to GLN is balanced out by a slightly higher mean number of Type II errors, a disparity which grows with the number of active interactions. In summary, when interactions are especially sparse, the SRL outperforms every other method in terms of prediction and selection. Otherwise, the SRL and GLN perform comparably to one another in terms of prediction, although the SRL produces more interpretable/transparent models by admitting fewer unnecessary (false) interactions.
\begin{figure}
\caption{Model selection performance of various interaction fitting methods. The top three plots show the mean FDR across simulations, the middle three plots show the mean number of Type I errors, and the bottom three plots show the mean number of Type II errors. The metrics are stratified into main-effects (left), interaction effects (center), and their combined/overall values (right). LS0 refers to the lasso fit using only the original terms, APL refers to the lasso fit using the original terms and all pairwise interactions, SRL refers to the sparsity-ranked lasso fit with $\gamma = 0.5$, and GLN refers to the glinternet model. For all models, $\lambda$ was tuned with 10-fold cross-validation.}
\label{fig:Fig_05}
\end{figure}
\hypertarget{application-gene-environment-interactions}{ \section{Application: Gene-Environment Interactions}\label{application-gene-environment-interactions}}
\hypertarget{background}{ \subsection{Background}\label{background}}
We wish to show how SRL methods can be used in the context of genetic data, specifically for the purpose of detecting important gene-environment interactions. Gene-environment interactions make sense biologically, but unfortunately, they are very difficult to detect in practice. With high-dimensional data, the detection of any meaningful association is sufficiently challenging, yet looking for interactions with high-dimensional data is akin to searching for several needles in tens of thousands of haystacks. We utilize a study that collected data on 442 patients with lung cancer (adenocarcinoma) (Shedden et al., 2008). For each patient, the investigators observed the time of death or censor (the primary outcome), 22,283 gene expression measurements taken from a sample of the lung cancer tumor, and some clinical covariates: sex, race, age, whether the patient received chemotherapy, smoking history, and cancer stage. The main outcome of overall survival is presented in Figure \ref{fig:Fig_06}.
\begin{figure}
\caption{Kaplan-Meier curve for the Shedden data; survival time is the primary outcome of the study, which we predict using regularized Cox regression models.}
\label{fig:Fig_06}
\end{figure}
\hypertarget{methods}{ \subsection{Methods}\label{methods}}
In order to fit models with gene-environment interactions, we take our outcome \((\boldsymbol y, \boldsymbol d)\) as the time until death/censor and the death indicator, respectively. We model the hazard of death using a Cox proportional hazards model with our predictors being comprised of the genetic and clinical covariates described in the previous section denoted by \(X\) and \(Z\), respectively. For our purposes, we are only interested in interactions that may occur between \(X\) and \(Z\) or within \(Z\); we do not look for interactions that occur within \(X\)\footnote{Strictly speaking, the clinical covariates are not all environmental, but we decided to include all of their interactions as candidates out of interest. However, selected interactions should be considered in their proper context and accordingly labelled using appropriate terminology.}. Since the outcome is time-to-event, we substitute the partial likelihood for the Cox regression model into the objective function, taking the place of the least squares term.
We investigate the performance of three candidate modeling frameworks: the lasso using only main effects (LS), the sparsity-ranked lasso (SRL), and the all-pairwise lasso (APL). Within the SRL framework, we set \(\gamma = 0.5\), and investigate three different penalty schemes. Since our features consist of both clinical and genetic covariates, we treat these as two separate covariate groups of different sizes in a model we abbreviate as SR0, which does not include any interactions (but still uses the SRL framework). SR1 refers to the SRL approach for both the main and interaction effects with proportional weighting. Finally, SR2 refers to the cumulative SRL, wherein the penalty increases cumulatively for clinical covariates, genetic covariates, and their interactions (in that order).
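A sketch of the SR0 fit using \texttt{ncvsurv} from the \texttt{ncvreg} package follows; here \texttt{Z} and \texttt{X} are assumed matrices of clinical and genetic covariates, and \texttt{y} and \texttt{d} are the observed times and death indicators.

\begin{verbatim}
library(ncvreg)

q <- ncol(Z); G <- ncol(X)              # clinical and genetic group sizes
W <- cbind(Z, X)

# SR0: gamma = 0.5, i.e. weights proportional to sqrt(group size),
# main effects only (no interaction columns)
w0  <- c(rep(sqrt(q), q), rep(sqrt(G), G))
sr0 <- ncvsurv(W, cbind(y, d), penalty = "lasso", penalty.factor = w0)
\end{verbatim}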
The first step in the modeling process is to split the data \(\mathbb X = \left[\boldsymbol y, \boldsymbol d, X, Z \right]\) randomly into a training set \(\mathbb X_{\text{train}} \ (n = 342)\) and a test set \(\mathbb X_{\text{test}} \ (n = 100)\). Second, based on \(\mathbb X_{\text{train}}\), we use repeated (\(r=10\)) cross-validation (\(k=10\)) to tune each of the aforementioned models (with respect to \(\lambda\)). At this stage, we also select the optimal modeling structure. Third, we use the optimally tuned model within each modeling structure to predict outcomes on the test set \(\mathbb X_{\text{test}}\), comparing performance between the models and confirming that the optimal structure we selected in the prior step performed the best on \(\mathbb X_{\text{test}}\). Finally, we re-fit and re-tune the optimal modeling structure using the full data \(\mathbb X\) in order to interpret the best final model.
In order to assess predictive efficacy for cross validation, we use the expected extra-sample Cox partial deviance, estimated as described in the \texttt{ncvreg} documentation (Breheny and Huang, 2011). While this measure is difficult to interpret in an absolute sense, it can be effective in assessing predictive accuracy in a relative sense. We also calculate an estimate of the out-of-sample \(R^2\) based on the deviance, and we compute both accuracy measures using the test set as well as the CV process.
As a final purely visual assessment of predictive performance, we categorize individuals from the test data into three categories based on their expected risk score (low-risk, medium-risk, or high-risk). The cut points are set to be the 33rd and 67th percentile of the linear predictions on the test set, which could vary across methods. Then, using the test set, we plot Kaplan-Meier (KM) curves for each method stratified by the test set's predicted risk score categories. More separation among those stratifications on the KM plot means better predictive performance; such delineation indicates that the model is doing a good job of classifying high-, medium-, and low-risk patients in the test set.
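The risk-score stratification step can be sketched as follows, assuming a fitted model \texttt{sr0} as above and held-out objects \texttt{W\_test}, \texttt{y\_test}, and \texttt{d\_test}:

\begin{verbatim}
library(survival)

lp  <- predict(sr0, W_test, type = "link")      # linear predictors
grp <- cut(lp, breaks = quantile(lp, c(0, 1/3, 2/3, 1)),
           labels = c("low", "medium", "high"), include.lowest = TRUE)
plot(survfit(Surv(y_test, d_test) ~ grp), col = 1:3)
\end{verbatim}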
\hypertarget{results}{ \subsection{Results}\label{results}}
The estimated extra- and out-of-sample Cox partial deviance and Cox-Snell \(R^2\) by model are shown in Figure \ref{fig:Fig_07} and Table \ref{tab:tab02_1}. We find that APL performed quite poorly, which indicates that the consideration of pairwise interactions, without accounting for ranked sparsity, is not a good idea. The LS method performed only slightly better, which indicates that penalizing the genetic and clinical covariates equally may not be advised either. The relatively strong performance of SR0, SR1, and SR2 indicates that the sparsity-ranked lasso achieves a satisfactory middle ground. Since these SRL models all perform similarly, and none of them select any interactions (see Table \ref{tab:tab02_2}), we have little to no evidence that any prominent gene-environment interactions are discoverable in these data.
\begin{figure}
\caption{Maximum Cox-Snell $R^2$ achieved for each model. Box plots consist of the fold-averaged estimate for each of 10 repeats, so the spread of these results is due to differences in the fold assignments related to the random number generation seed, and does not represent the variability in the CV estimate itself.}
\label{fig:Fig_07}
\end{figure}
\captionsetup{width=.82\textwidth}
\begin{table}
\caption{\label{tab:tab02_1}Estimated predictive performance (mean deviance and Cox-Snell $R^2$) calculated using extra- and out-of-sample data broken down by modeling framework. Cross-validated values are estimated with 10-folds and 10 repeats.} \centering \begin{tabular}[t]{lrrrr} \toprule \multicolumn{1}{c}{ } & \multicolumn{2}{c}{Cross-Validation} & \multicolumn{2}{c}{Test set} \\ \cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5}
& Deviance & $R^2$ & Deviance & $R^2$\\ \midrule Lasso with only main effects (LS) & 10.35 & 0.060 & 7.77 & 0.141\\ SRL with only main effects, proportional weights (SR0) & 10.20 & 0.188 & 7.70 & 0.196\\ SRL with interactions, proportional weights (SR1) & 10.18 & 0.205 & 7.72 & 0.184\\ SRL with interactions, cumulative weights (SR2) & 10.22 & 0.177 & 7.69 & 0.204\\ All-pairwise lasso (APL) & 10.40 & 0.013 & 7.86 & 0.054\\ \bottomrule \end{tabular} \end{table}
\captionsetup{width=.8\textwidth}
In Figure \ref{fig:Fig_8}, we show the categorization efficacy of each model using the test data set. SR2 is omitted here because its performance is very similar to SR0. In the plots, we note that the LS and the APL models did a good job classifying high-risk patients, but did not distinguish well between medium- and low-risk patients. The SR0 and SR1 models seem to have done a relatively good job classifying individuals in the test set, which is most likely due to the handling of the clinical covariates (SRL is shrinking the clinical variables relatively less than the LS model).
\begin{figure}
\caption{Risk score classification performance of each modeling framework using the test data set. More separation among stratifications on the KM plot indicates that the model is doing a good job of classifying high-, medium-, and low-risk patients. LS refers to the lasso on the original covariates, SR0 refers to the SRL with proportional penalties on clinical and genetic covariates, SR1 refers to the SRL with interactions and proportional penalty weights, and APL refers to the all-pairwise lasso.}
\label{fig:Fig_8}
\end{figure}
After re-fitting and re-tuning each model to the entire data set, we examine the number of selections (S) and the sum of the magnitude of the standardized coefficients by covariate group for each optimally tuned model in Table \ref{tab:tab02_2}. Evidently, the LS model found most of its signal in the genetic covariates, 43 of which had nonzero coefficients. Only 3 clinical covariates were selected, and the combined magnitude of the standardized coefficients (\(||\beta||_1\)) was only
\(0.134\). SR0, on the other hand, found the majority of the signal to lie in the six clinical covariates (\(||\beta||_1 = 0.938\)), though it still found a good amount of signal (\(0.857\)) in 42 of the genetic covariates. Neither SR1 nor SR2 selected any gene-environment interactions. SR1 found less signal in the genetic variables than SR0 -- this indicates that the addition of interaction terms necessitated a higher amount of shrinkage in the main effects (particularly the genetic main effects). For SR2 however, since the coefficients are being penalized in a cumulative fashion, the amount of signal is very similar to SR0 when no interactions were considered. APL, although having discovered 5 gene-environment interactions, is clearly not able to find much signal at all; it is shrinking all of the effects considerably. These results taken together indicate that there are no informative gene-environment interactions, and that the clinical variables should be penalized proportionally less than the genetic variables.
\captionsetup{width=.78\textwidth}
\begin{table}
\caption{\label{tab:tab02_2}The number of selections (S) and the sum of the magnitude of the standardized coefficients by covariate group for each (optimally tuned) model. Tuning of $\lambda$ was accomplished with 10-fold cross-validation with 10 repeats, and $\gamma$ was set to 0.5. LS refers to the lasso on the original covariates, SR0 refers to the SRL with proportional penalties on clinical and genetic covariates, SR1 refers to the SRL with interactions and proportional penalty weights, SR2 refers to the SRL with interactions and cumulative penalty weights, and APL refers to the all-pairwise lasso.} \centering \fontsize{10.5}{12.5}\selectfont \begin{tabular}[t]{>{\raggedright\arraybackslash}p{2.4cm}rrrrrrrrrr} \toprule \multicolumn{1}{c}{ } & \multicolumn{2}{c}{LS} & \multicolumn{2}{c}{SR0} & \multicolumn{2}{c}{SR1} & \multicolumn{2}{c}{SR2} & \multicolumn{2}{c}{APL} \\ \cmidrule(l{3pt}r{3pt}){2-3} \cmidrule(l{3pt}r{3pt}){4-5} \cmidrule(l{3pt}r{3pt}){6-7} \cmidrule(l{3pt}r{3pt}){8-9} \cmidrule(l{3pt}r{3pt}){10-11}
& S & $||\beta||_1$ & S & $||\beta||_1$ & S & $||\beta||_1$ & S & $||\beta||_1$ & S & $||\beta||_1$\\ \midrule Clinical & 3 & 0.134 & 6 & 0.938 & 6 & 0.926 & 6 & 0.935 & 1 & 0.013\\ Genetic & 43 & 1.133 & 42 & 0.857 & 31 & 0.583 & 42 & 0.786 & 6 & 0.229\\ Env-Env & 0 & 0.000 & 0 & 0.000 & 0 & 0.000 & 0 & 0.000 & 0 & 0.000\\ Gene-Env & 0 & 0.000 & 0 & 0.000 & 0 & 0.000 & 0 & 0.000 & 5 & 0.041\\ \bottomrule \end{tabular} \end{table}
\captionsetup{width=.8\textwidth}
In Table \ref{tab:tab02_3}, we show how many selected variables were shared for each model selection method. There was very high agreement in the SR models, and in fact perfect agreement between SR0 and SR2 (they selected all of the same variables). SR1 selected only one covariate that was not selected by SR0 or SR2, a feature called ``checkpoint kinase 1,'' although its estimated coefficient was very small.
\captionsetup{width=.382\textwidth}
\begin{table}
\caption{\label{tab:tab02_3}Number of selected coefficients common among each method.} \centering \begin{tabular}[t]{lccccc} \toprule
& LS & SR0 & SR1 & SR2 & APL\\ \midrule LS & 46 & 34 & 27 & 34 & 7\\ SR0 & & 48 & 36 & 48 & 7\\ SR1 & & & 37 & 36 & 7\\ SR2 & & & & 48 & 7\\ APL & & & & & 12\\ \bottomrule \end{tabular} \end{table}
\captionsetup{width=.8\textwidth}
Finally, we will interpret the SR0 model, which was very similar to the SR2 model. In terms of the estimated hazard ratios (HRs), the most protective effect we found was for those in the ``never smoked'' group (HR = 0.74). We found two clinically significant protective gene expressions: FAM117A (HR = 0.89) and CTAGE5 (HR = 0.90). We found harmful effects if subjects were white (HR = 1.33), male (HR = 1.32), or had chemotherapy (HR = 1.94). Additionally, for every 10-year increase in age, the hazard increases by a multiplicative factor of 1.44. Interestingly, the clinical coefficients in this model are similar to the estimates from the model with only clinical covariates. Oddly, the important gene expressions identified by SR0 were not used as classifiers in the original paper. Note that since this was not a randomized controlled trial, these effects are not indicative of causal relationships; in particular, the high HR on chemotherapy status does not indicate that chemotherapy was harmful.
\hypertarget{discussion}{ \section{Discussion}\label{discussion}}
\hypertarget{strengths-and-weaknesses-of-the-srl}{ \subsection{Strengths and Weaknesses of the SRL}\label{strengths-and-weaknesses-of-the-srl}}
We have shown that the sparsity-ranked lasso performs relatively well for selecting transparent models. Whereas other methods for selecting polynomials and/or interactions tend to select overly opaque models (models with high-order relationships that are difficult to interpret), SRL naturally selects models that have more main effects and fewer ``complicating'' terms. In other words, the SRL limits the tendency to select too many interactions, and controls the number of false discoveries among interactions to be close to the same or less than that in the main effects. Therefore, the SRL is a technique that can be utilized and trusted to select from interactions and polynomials without yielding overly convoluted interpretations.
Since many authors have already contributed to the problem of selecting from all possible interactions, discussion of the SRL compared to these competing methods is warranted. One major benefit of the SRL is that it can be applied to survival outcomes; at the time of writing, all of the competing methods we have mentioned are supported by open-source software packages, but to our knowledge none can handle survival outcomes. Thanks to the versatility of the \texttt{ncvreg} package, the SRL method can be used for binomial, continuous, survival, or Poisson outcomes (Breheny and Huang, 2011). Further, in \texttt{sparseR}, sparsity-ranked versions of non-convex regularization methods such as the Minimax Concave Penalty (MCP) (Zhang, 2010) and the Smoothly Clipped Absolute Deviations (SCAD) penalty (Fan and Li, 2001) are also feasible and implementable. We have shown that these non-convex methods work quite well (empirically) in this paper's supplement (Figures S7, S8). Another benefit of the SRL to consider is computational speed; glinternet has been shown to be 10--10,000 times faster than hierNet, and yet our method is quite a bit faster than glinternet (at least for our simulation settings, see supplemental Figures S9, S10). This speed-up does not seem to change as the sample size increases, and it is especially noticeable when cross-validation is employed to tune the models.
Perhaps most importantly, we have found that the SRL works better than glinternet (in terms of prediction accuracy and the false discovery rate) when there are no interactions or when interactions are especially sparse. This strength suggests another important benefit of the SRL procedure; it can be worthwhile, convenient, and straightforward to extend the SRL to examine higher order interactions (and polynomials). As opposed to competing methods, the SRL will not heavily inflate the number of Type I errors in the course of such an investigation, even as the number of \(k\)-order interactions increases combinatorially with \(k\).
One notable weakness of the SRL is that it requires storage of a potentially large matrix of interactions. However, recent advances in the scalability of regularization algorithms, such as the \texttt{biglasso} package (Zeng and Breheny, 2021), are applicable to the SRL as well. Another weakness that the SRL shares with other regularization procedures is that the optimal mechanism of formal inference is unclear. It is possible to extend recent advances in the marginal false discovery rate (mFDR) (Breheny, 2018; Miller and Breheny, 2019) to the SRL framework, and this method is currently included in the \texttt{sparseR} package. Yet whether this method is optimal for formal inference, and whether the mFDR works well in ranked-sparsity settings, remains an area of future research. Finally, one often unaddressed issue with using regularization to search for important interactions is that the model fit is sensitive to the choice of origin among the covariates; in particular, the method is not invariant to changes of location, such as centering, in the covariates. The tutorial for \texttt{sparseR} goes into more detail about what can be done in circumstances where the best origin location is unknown ahead of time. One solution is to use our ranked-sparsity-based information criterion RBIC (Peterson, 2019), which is invariant to location changes in the covariates, to search for and select an optimal model, comparing this fit with the estimates from the regularization procedure.
In the course of our exploration, some of our results indicate that the SRL will not perform as well as competitors in certain situations. This relatively poor performance was seen when using the cumulative SRL for polynomials in settings with highly skewed covariates or when the functional forms cannot be well-represented by polynomials, in which circumstances other smoothers tend to work better. Also, the performance of the SRL for interactions relative to glinternet seems to depend on the hierarchical configuration of the generating model; glinternet can perform slightly better than the SRL when the true model is strongly hierarchical, whereas the SRL method performs better when the true model is weakly or non-hierarchical. Their relative predictive performance depends to an extent on the mix of strong, weak, and non-hierarchical active interactions, and more research is needed to determine exactly how and why this is the case.
Finally, while we have motivated intuitive guidelines for the selection of \(\gamma\), future work should investigate the practicality and utility of optimizing the choice of \(\gamma\) with respect to predictive accuracy. We incorporated \(\gamma\) in the formulation of the SRL for two reasons: (1) to show that the original lasso can be written as a special case of the SRL when \(\gamma=0\), and (2) to explore the apparent benefits to the cumulative SRL for polynomials from additional tuning of its weighting scheme. For this reason, we performed a minor amount of tuning for \(\gamma\) when applying the cumulative SRL in this work (showing that BIC or cross-validation can be used). Outside of the cumulative SRL, e.g.~in our real data analysis, we opted for fixing \(\gamma=0.5\) for simplicity and because we intended each covariate group to contribute the same amount of prior information for SR0 and SR1. For SR2, we acknowledge that further tuning of \(\gamma\) may yield slightly better results, but would then be less comparable to SR0 and SR1.
\hypertarget{other-applications-of-the-srl}{ \subsection{Other Applications of the SRL}\label{other-applications-of-the-srl}}
Though not the primary focus of this paper, the SRL has wide-reaching applications outside of interaction and polynomial feature selection. We have developed SRL methods for automated autoregressive (AR) order selection for time series data (Miller et al., 2019), finding the procedure particularly helpful for seasonal time series with uncertainty in the seasonal period. Additionally, we have utilized the SRL in conjunction with adaptive out-of-sample time series regression methods to incorporate past states of a model into the fitting of current or future models via varying penalization weights. Finally, we have extended the SRL into what we call ``ranked cost'' contexts, wherein candidate covariates have quantifiably different costs of data collection. In this setting, if any correlation exists among these features, the SRL can simultaneously optimize for predictive accuracy and the costs of future data collection; it produces the least costly model that can still predict as well as the optimal model. Our exploration of these extensions is still ongoing, but initial results have been promising.
In the context of gene-environment interactions, single nucleotide polymorphism (SNP) data are frequently used rather than gene expression data. SNP data are ordinal/categorical, and tend to have very high amounts of correlation. While further exploration of the performance of the SRL in the context of ordinal/categorical covariate data is warranted, we postulate that ranked sparsity methods would be a fruitful approach. In particular, ranked sparsity can be paired with other existing regularization approaches that work well for highly correlated data such as the elastic net, which is implemented in the \texttt{sparseR} package. In these highly-correlated settings, additional (minor) tuning of the SRL between \(\gamma \in \{0, 0.5, \infty\}\) can improve its relative performance to glinternet and the APL (Figures S2-S6)\footnote{This minor tuning simply involves picking the best performing model between the APL ($\gamma=0$), SRL ($\gamma=0.5$), and LS0 ($\gamma=\infty$); no additional CV is necessary.}. Further, for SNP data, a common simplifying assumption is that each SNP count relates linearly to the outcome (treating these covariates as numeric rather than categorical). The cumulative SRL approach for polynomials can navigate a powerful middle-ground between these qualitative/quantitative extremes. While it treats the SNP count as numeric and \emph{prefers} linearity, when strong evidence of nonlinearity is observable, the cumulative SRL approach will guardedly introduce polynomial terms to the active feature set. Conversely, in the absence of such evidence, the approach will not yield an overabundance of false discoveries.
\hypertarget{conclusion}{ \subsection{Conclusion}\label{conclusion}}
The ranked sparsity framework implements a broader definition of Occam's Razor where a model's simplicity is not purely equated to parsimony; it is also tied to the model's transparency and interpretability. The sparsity-ranked lasso provides an effective and fast approach for selecting from derived variables such as interactions or polynomials. As opposed to other methods of interaction selection, the SRL does not select an unreasonable number of false interaction effects and it does not overly shrink the main effects.
\singlespacing
\hypertarget{references}{ \section*{References}\label{references}} \addcontentsline{toc}{section}{References}
\hypertarget{refs}{} \leavevmode\hypertarget{ref-aic}{} Akaike, H. (1974) A new look at the statistical model identification. \emph{IEEE Transactions on Automatic Control}, \textbf{19}, 716--723.
\leavevmode\hypertarget{ref-bien2013}{} Bien, J., Taylor, J. and Tibshirani, R. (2013) A lasso for hierarchical interactions. \emph{The Annals of Statistics}, \textbf{41}, 1111--1141.
\leavevmode\hypertarget{ref-mbic}{} Bogdan, M., Frommlet, F., Biecek, P., Cheng, R., Ghosh, J.K. and Doerge, R. (2008) Extending the modified Bayesian information criterion (mBIC) to dense markers and multiple interval mapping. \emph{Biometrics}, \textbf{64}, 1162--1169.
\leavevmode\hypertarget{ref-boulesteix2017ipf}{} Boulesteix, A.-L., De Bin, R., Jiang, X. and Fuchs, M. (2017) IPF-lasso: Integrative-penalized regression with penalty factors for prediction based on multi-omics data. \emph{Computational and Mathematical Methods in Medicine}, \textbf{2017}, 7691937.
\leavevmode\hypertarget{ref-mfdr1}{} Breheny, P.J. (2018) Marginal false discovery rates for penalized regression models. \emph{Biostatistics}, \textbf{20}, 299--314.
\leavevmode\hypertarget{ref-breheny2011}{} Breheny, P. and Huang, J. (2011) Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. \emph{Annals of Applied Statistics}, \textbf{5}, 232--253.
\leavevmode\hypertarget{ref-chenchen2008}{} Chen, J. and Chen, Z. (2008) Extended Bayesian information criteria for model selection with large model spaces. \emph{Biometrika}, \textbf{95}, 759--771.
\leavevmode\hypertarget{ref-chipman1996}{} Chipman, H. (1996) Bayesian variable selection with related predictors. \emph{The Canadian Journal of Statistics / La Revue Canadienne de Statistique}, \textbf{24}, 17--36.
\leavevmode\hypertarget{ref-choi2010}{} Choi, N.H., Li, W. and Zhu, J. (2010) Variable selection with the strong heredity constraint and its oracle property. \emph{Journal of the American Statistical Association}, \textbf{105}, 354--364.
\leavevmode\hypertarget{ref-scad}{} Fan, J. and Li, R. (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. \emph{Journal of the American Statistical Association}, \textbf{96}, 1348--1360.
\leavevmode\hypertarget{ref-friedman2010note}{} Friedman, J., Hastie, T. and Tibshirani, R. (2010) A note on the group lasso and a sparse group lasso. preprint: \emph{arXiv:1001.0736}.
\leavevmode\hypertarget{ref-hao2018}{} Hao, N., Feng, Y. and Zhang, H.H. (2018) Model selection for high-dimensional quadratic regression via regularization. \emph{Journal of the American Statistical Association}, \textbf{113}, 615--625.
\leavevmode\hypertarget{ref-aicc}{} Hurvich, C.M. and Tsai, C.-L. (1989) Regression and time series model selection in small samples. \emph{Biometrika}, \textbf{76}, 297--307.
\leavevmode\hypertarget{ref-prioritylasso}{} Klau, S., Jurinovic, V., Hornung, R., Herold, T. and Boulesteix, A.-L. (2018) Priority-lasso: A simple hierarchical approach to the prediction of clinical outcome using multi-omics data. \emph{BMC bioinformatics}, \textbf{19}, 322.
\leavevmode\hypertarget{ref-recipes}{} Kuhn, M. and Wickham, H. (2019) recipes: Preprocessing tools to create design matrices. R package, available at https://CRAN.R-project.org/package=recipes.
\leavevmode\hypertarget{ref-lim2015}{} Lim, M. and Hastie, T. (2015) Learning interactions via hierarchical group-lasso regularization. \emph{Journal of Computational and Graphical Statistics}, \textbf{24}, 627--654.
\leavevmode\hypertarget{ref-mallow}{} Mallows, C.L. (1973) Some comments on Cp. \emph{Technometrics}, \textbf{15}, 661--675.
\leavevmode\hypertarget{ref-mfdr2}{} Miller, R.E. and Breheny, P. (2019) Marginal false discovery rate control for likelihood-based penalized regression models. \emph{Biometrical Journal}, \textbf{61}, 889--901.
\leavevmode\hypertarget{ref-statepi_miller}{} Miller, A.C., Peterson, R.A., Singh, I., Pilewski, S. and Polgreen, P.M. (2019) Improving state-level influenza surveillance by incorporating real-time smartphone-connected thermometer readings across different geographic domains. \emph{Open Forum Infectious Diseases}, \textbf{6}.
\leavevmode\hypertarget{ref-dissertation}{} Peterson, R.A. (2019) \emph{Ranked Sparsity: A Regularization Framework for Selecting Features in the Presence of Prior Informational Asymmetry}. PhD thesis, Department of Biostatistics, University of Iowa.
\leavevmode\hypertarget{ref-bestNormalize}{} Peterson, R.A. (2021) Finding Optimal Normalizing Transformations via bestNormalize. \emph{The R Journal}, \textbf{13}, 310--329.
\leavevmode\hypertarget{ref-orqpaper}{} Peterson, R.A. and Cavanaugh, J.E. (2020) Ordered quantile normalization: A semiparametric transformation built for the cross-validation era. \emph{Journal of Applied Statistics}, \textbf{47}, 2312--2327.
\leavevmode\hypertarget{ref-rcore}{} R Core Team. (2020) \emph{R: A Language and Environment for Statistical Computing}. R Foundation for Statistical Computing, Vienna, Austria.
\leavevmode\hypertarget{ref-bic}{} Schwarz, G. (1978) Estimating the dimension of a model. \emph{The Annals of Statistics}, \textbf{6}, 461--464.
\leavevmode\hypertarget{ref-shedden2008}{} Shedden, K., Taylor, J., Enkemann, S., Tsao, M., Yeatman, T., Gerald, W., et al. (2008) Gene expression-based survival prediction in lung adenocarcinoma: A multi-site, blinded validation study. \emph{Nature Medicine}, \textbf{14}, 822--827.
\leavevmode\hypertarget{ref-biomarkertreatmentinteractions}{} Ternès, N., Rotolo, F., Heinze, G. and Michiels, S. (2017) Identification of biomarker-by-treatment interactions in randomized clinical trials with survival outcomes and high-dimensional spaces. \emph{Biometrical Journal}, \textbf{59}, 685--701.
\leavevmode\hypertarget{ref-tibs1996}{} Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. \emph{Journal of the Royal Statistical Society: Series B}, \textbf{58}, 267--288.
\leavevmode\hypertarget{ref-adaptivelsglm}{} Wang, M. and Wang, X. (2014) Adaptive lasso estimators for ultrahigh dimensional generalized linear models. \emph{Statistics \& Probability Letters}, \textbf{89}, 41--50.
\leavevmode\hypertarget{ref-mgcv}{} Wood, S.N. (2011) Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. \emph{Journal of the Royal Statistical Society (B)}, \textbf{73}, 3--36.
\leavevmode\hypertarget{ref-zeng2017}{} Zeng, Y. and Breheny, P. (2021) The biglasso package: A memory- and computation-efficient solver for lasso model fitting with big data in R. \emph{The R Journal}, \textbf{12}, 6--19.
\leavevmode\hypertarget{ref-mcp}{} Zhang, C.-H. (2010) Nearly unbiased variable selection under minimax concave penalty. \emph{The Annals of Statistics}, \textbf{38}, 894--942.
\leavevmode\hypertarget{ref-adaptivelasso}{} Zou, H. (2006) The adaptive lasso and its oracle properties. \emph{Journal of the American Statistical Association}, \textbf{101}, 1418--1429.
\end{document}
The Hénon equation with a critical exponent under the Neumann boundary condition
Tarik Mohammed Touaoula
Département de Mathématiques, Faculté des Sciences, Université de Tlemcen, Laboratoire d'Analyse Non Linéaire et Mathématiques Appliquées, Tlemcen, BP 119, 13000, Algeria
Received August 2017 Revised April 2018 Published June 2018
Global asymptotic and exponential stability of equilibria for the following class of functional differential equations with distributed delay is investigated
$ x'(t)=-f(x(t))+\int_{0}^{\tau}h(a)g(x(t-a))da.$
We make our analysis by introducing a new approach, combining a Lyapunov functional and monotone semiflow theory. The relevance of our results is illustrated by studying the well-known integro-differential Nicholson's blowflies and Mackey-Glass equations, where some delay independent stability conditions are provided. Furthermore, new results related to exponential stability region of the positive equilibrium for these both models are established.
Keywords: Monotone semi-flow, global asymptotic and exponential stability, fluctuation method.
Mathematics Subject Classification: Primary: 34K20, 37L15; Secondary: 92B05.
Citation: Tarik Mohammed Touaoula. Global stability for a class of functional differential equations (Application to Nicholson's blowflies and Mackey-Glass models). Discrete & Continuous Dynamical Systems - A, 2018, 38 (9) : 4391-4419. doi: 10.3934/dcds.2018191
L. Berezansky, E. Braverman and L. Idels, Nicholson's blowflies differential equations revisited: Main results and open problems, Applied Math. Modelling, 34 (2010), 1405-1417. doi: 10.1016/j.apm.2009.08.027. Google Scholar
L. Berezansky, E. Braverman and L. Idels, Mackey-Glass model of hematopoiesis with non-monotone feedback: Stability, oscillation and control, Appl. Math. Compt., 219 (2013), 6268-6283. doi: 10.1016/j.amc.2012.12.043. Google Scholar
E. Braverman and D. Kinzebulatov, Nicholson's blowflies equation with distributed delay, Can. Appl. Math. Q, 14 (2006), 107-128. Google Scholar
E. Braverman and S. Zhukovskiy, Absolute and delay-dependent stability of equations with a distributed delay, Discrete and Continuous Dynam. Systems, 32 (2012), 2041-2061. doi: 10.3934/dcds.2012.32.2041. Google Scholar
H. A. El-Morshedy, Global attractivity in a population model with nonlinear death rate and distributed delays, J. Math. Anal. Appl., 410 (2014), 642-658. doi: 10.1016/j.jmaa.2013.08.060. Google Scholar
C. Foley and M. C. Mackey, Dynamics hematological disease, J. Math. Biol., 58 (2009), 285-322. doi: 10.1007/s00285-008-0165-3. Google Scholar
K. Gopalsamy, Stability and Oscillation in Delay Differential Equations of Population Dynamics, Kluwer Academic Publishers, Dordrecht, Boston, London, 1992. doi: 10.1007/978-94-015-7920-9. Google Scholar
W. S. C Gurney, S. P. Blythe and R. M. Nisbet, Nicholson's blowflies revisited, Nature, 287 (1980), 17-21. Google Scholar
I. Gyori and S. Trofimchuk, Global attractivity in $x'(t) = -\delta x(t)+pf(x(t-h))$, Dynam. Syst. Appl., 8 (1999), 197-210. Google Scholar
J. Hale, Asymptotic Behavior of Dissipative Systems, Math. Surveys Monogr., vol 25, American Mathematical Society, Providence, RI, 1988. Google Scholar
J. Hale and S. M. Verduyn Lunel, Introduction to Functional Differential Equations, Applied Mathematical Sciences 99, Springer-Verlag, New York, 1993. doi: 10.1007/978-1-4612-4342-7. Google Scholar
C. Huang, Z. Yang, T. Yi and X. Zou, On the basin of attraction for a class of delay differential equations with non-monotone bistable nonlinearities, J. Differ. Equations, 256 (2014), 2101-2114. doi: 10.1016/j.jde.2013.12.015. Google Scholar
A. Ivanov and M. Mammadov, Global asymptotic stability in a class of nonlinear differential delay equations, Discrete and Continuous Dynam. Systems, 1 (2011), 727-736. Google Scholar
T. Krisztin and H. O. Walther, Unique periodic orbits for delayed positive feedback and the global attractor, J. Dynam. Differ. Equations, 13 (2001), 1-57. doi: 10.1023/A:1009091930589. Google Scholar
Y. Kuang, Delay Differential Equations, with Application in Population Dynamics, Academic Press, INC. 1993. Google Scholar
B. Lani-Wayda, Erratic solutions of simple delay equations, Trans. Amer. Math. Soc., 351 (1999), 901-945. doi: 10.1090/S0002-9947-99-02351-X. Google Scholar
E. Liz, M. Pinto, V. Tkachenko and S. Trofimchuk, A global stability criterion for a family of delayed population models, Quart. Appl. Math., 63 (2005), 56-70. doi: 10.1090/S0033-569X-05-00951-3. Google Scholar
E. Liz and G. Rost, On the global attractor of delay differential equations with unimodal feedback, Discrete and continuous dynam. systems, 24 (2009), 1215-1224. doi: 10.3934/dcds.2009.24.1215. Google Scholar
E. Liz, V. Tkachenko and S. Trofimchuk, A global stability criterion for scalar functional differential equations, SIAM J. Math. Anal., 35 (2003), 596-622. doi: 10.1137/S0036141001399222. Google Scholar
E. Liz, V. Tkachenko and S. Trofimchuk, Mackey-Glass type delay differential equations near the boundary of absolute stability, J. Math. Anal. Appl., 275 (2002), 747-760. doi: 10.1016/S0022-247X(02)00416-X. Google Scholar
M. C. Mackey, Unified hypothesis for the origin of aplastic anemia and periodic hematopoiesis, Blood, 51 (1978), 941-956. Google Scholar
M. C. Mackey and L. Glass, Oscillations and chaos in physiological control systems, Science, 197 (1977), 287-289. doi: 10.1126/science.267326. Google Scholar
M. C. Mackey and R. Rudnicki, Global stability in a delayed partial differential equation describing cellular replication, J. Math. Biol., 33 (1994), 89-109. doi: 10.1007/BF00160175. Google Scholar
J. Mallet-Paret and R. Nussbaum, Global continuation and asymptotic behavior for periodic solutions of a differential delay equation, Ann. Mat. Pura. Appl., 145 (1986), 33-128. doi: 10.1007/BF01790539. Google Scholar
J. Mallet-Paret and R. Nussbaum, A differential-delay equation arising in optics and physiology, SIAM. J. Math. Anal., 20 (1989), 249-292. doi: 10.1137/0520019. Google Scholar
J. Mallet-Paret and G. R. Sell, The Poincaré-Bendixson theorem for monotone cyclic feedback systems with delay, J. Differ. Equations, 125 (1996), 441-489. doi: 10.1006/jdeq.1996.0037. Google Scholar
G. Rost and J. Wu, Domain-decomposition method for the global dynamics of delay differential equations with unimodal feedback, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 463 (2007), 2655-2669. doi: 10.1098/rspa.2007.1890. Google Scholar
H. L. Smith, Monotone Dynamical Systems: An Introduction to the Theory of Competitive and Cooperative Systems, Math. Surveys Monogr., vol 41, Amer. Math. Soc., 1995. Google Scholar
H. L. Smith, An Introduction to Delay Differential Equations with Applications to the Life Sciences, Springer, 2011. doi: 10.1007/978-1-4419-7646-8. Google Scholar
H. L. Smith and H. R. Thieme, Monotone semiflows in scalar non quasi-monotone functional differential equations, J. Math. Anal. Appl., 150 (1990), 289-306. doi: 10.1016/0022-247X(90)90105-O. Google Scholar
H. L. Smith and H. R. Thieme, Dynamical Systems and Population Persistence, Graduate Studies in Mathematics V. 118, AMS, 2011. Google Scholar
H. R. Thieme, Mathematics in Population Biology, Princeton University Press, Princeton 2003. Google Scholar
D. Xu and X.-Q. Zhao, A nonlocal reaction-diffusion population model with stage structure, Can. Appl. Math. Q., 11 (2003), 303-319. Google Scholar
T. Yi, Y. Chen and J. Wu, Global dynamics of delayed reaction-diffusion equations in unbounded domains, Z. Angew. Math. Phys., 63 (2012), 793-812. doi: 10.1007/s00033-012-0224-x. Google Scholar
T. Yi and X. Zou, Map dynamics versus dynamics of associated delay reaction-diffusion equations with a Neumann condition, Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 466 (2010), 2955-2973. doi: 10.1098/rspa.2009.0650. Google Scholar
T. Yi and X. Zou, Global dynamics of a delay differential equation with spatial non-locality in an unbounded domain, J. Differ. Equations, 251 (2011), 2598-2611. doi: 10.1016/j.jde.2011.04.027. Google Scholar
T. Yi and X. Zou, On Dirichlet Problem for a Class of Delayed Reaction-Diffusion Equations with Spatial Non-locality, J. Dyn. Diff. Equat., 25 (2013), 959-979. doi: 10.1007/s10884-013-9324-3. Google Scholar
Y. Yuan and J. Belair, Stability and Hopf bifurcation analysis for functional differential equation with distributed delay, SIAM J. Appl. Dyn. Syst., 10 (2011), 551-581. doi: 10.1137/100794493. Google Scholar
Y. Yuan and X. Q. Zhao, Global stability for non-monotone delay equations (with application to a model of blood cell production), J. Differ. Equations, 252 (2012), 2189-2209. doi: 10.1016/j.jde.2011.08.026. Google Scholar
Multinomial logistic regression
In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.[1] That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
"Multinomial regression" redirects here. For the related Probit procedure, see Multinomial probit.
Multinomial logistic regression is known by a variety of other names, including polytomous LR,[2][3] multiclass LR, softmax regression, multinomial logit (mlogit), the maximum entropy (MaxEnt) classifier, and the conditional maximum entropy model.[4]
Background
Multinomial logistic regression is used when the dependent variable in question is nominal (equivalently categorical, meaning that it falls into any one of a set of categories that cannot be ordered in any meaningful way) and for which there are more than two categories. Some examples would be:
• Which major will a college student choose, given their grades, stated likes and dislikes, etc.?
• Which blood type does a person have, given the results of various diagnostic tests?
• In a hands-free mobile phone dialing application, which person's name was spoken, given various properties of the speech signal?
• Which candidate will a person vote for, given particular demographic characteristics?
• Which country will a firm locate an office in, given the characteristics of the firm and of the various candidate countries?
These are all statistical classification problems. They all have in common a dependent variable to be predicted that comes from one of a limited set of items that cannot be meaningfully ordered, as well as a set of independent variables (also known as features, explanators, etc.), which are used to predict the dependent variable. Multinomial logistic regression is a particular solution to classification problems that use a linear combination of the observed features and some problem-specific parameters to estimate the probability of each particular value of the dependent variable. The best values of the parameters for a given problem are usually determined from some training data (e.g. some people for whom both the diagnostic test results and blood types are known, or some examples of known words being spoken).
Assumptions
The multinomial logistic model assumes that data are case-specific; that is, each independent variable has a single value for each case. As with other types of regression, there is no need for the independent variables to be statistically independent from each other (unlike, for example, in a naive Bayes classifier); however, collinearity is assumed to be relatively low, as it becomes difficult to differentiate between the impact of several variables if this is not the case.[5]
If the multinomial logit is used to model choices, it relies on the assumption of independence of irrelevant alternatives (IIA), which is not always desirable. This assumption states that the odds of preferring one class over another do not depend on the presence or absence of other "irrelevant" alternatives. For example, the relative probabilities of taking a car or bus to work do not change if a bicycle is added as an additional possibility. This allows the choice of K alternatives to be modeled as a set of K-1 independent binary choices, in which one alternative is chosen as a "pivot" and the other K-1 compared against it, one at a time. The IIA hypothesis is a core hypothesis in rational choice theory; however numerous studies in psychology show that individuals often violate this assumption when making choices. An example of a problem case arises if choices include a car and a blue bus. Suppose the odds ratio between the two is 1 : 1. Now if the option of a red bus is introduced, a person may be indifferent between a red and a blue bus, and hence may exhibit a car : blue bus : red bus odds ratio of 1 : 0.5 : 0.5, thus maintaining a 1 : 1 ratio of car : any bus while adopting a changed car : blue bus ratio of 1 : 0.5. Here the red bus option was not in fact irrelevant, because a red bus was a perfect substitute for a blue bus.
If the multinomial logit is used to model choices, it may in some situations impose too much constraint on the relative preferences between the different alternatives. It is especially important to take into account if the analysis aims to predict how choices would change if one alternative were to disappear (for instance if one political candidate withdraws from a three candidate race). Other models like the nested logit or the multinomial probit may be used in such cases as they allow for violation of the IIA.[6]
Model
See also: Logistic regression
Introduction
There are multiple equivalent ways to describe the mathematical model underlying multinomial logistic regression. This can make it difficult to compare different treatments of the subject in different texts. The article on logistic regression presents a number of equivalent formulations of simple logistic regression, and many of these have analogues in the multinomial logit model.
The idea behind all of them, as in many other statistical classification techniques, is to construct a linear predictor function that constructs a score from a set of weights that are linearly combined with the explanatory variables (features) of a given observation using a dot product:
$\operatorname {score} (\mathbf {X} _{i},k)={\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i},$
where Xi is the vector of explanatory variables describing observation i, βk is a vector of weights (or regression coefficients) corresponding to outcome k, and score(Xi, k) is the score associated with assigning observation i to category k. In discrete choice theory, where observations represent people and outcomes represent choices, the score is considered the utility associated with person i choosing outcome k. The predicted outcome is the one with the highest score.
The difference between the multinomial logit model and numerous other methods, models, algorithms, etc. with the same basic setup (the perceptron algorithm, support vector machines, linear discriminant analysis, etc.) is the procedure for determining (training) the optimal weights/coefficients and the way that the score is interpreted. In particular, in the multinomial logit model, the score can directly be converted to a probability value, indicating the probability of observation i choosing outcome k given the measured characteristics of the observation. This provides a principled way of incorporating the prediction of a particular multinomial logit model into a larger procedure that may involve multiple such predictions, each with a possibility of error. Without such means of combining predictions, errors tend to multiply. For example, imagine a large predictive model that is broken down into a series of submodels where the prediction of a given submodel is used as the input of another submodel, and that prediction is in turn used as the input into a third submodel, etc. If each submodel has 90% accuracy in its predictions, and there are five submodels in series, then the overall model has only 0.9^5 ≈ 59% accuracy. If each submodel has 80% accuracy, then overall accuracy drops to 0.8^5 ≈ 33% accuracy. This issue is known as error propagation and is a serious problem in real-world predictive models, which are usually composed of numerous parts. Predicting probabilities of each possible outcome, rather than simply making a single optimal prediction, is one means of alleviating this issue.
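A three-line computation reproduces these figures (the per-submodel accuracies are of course hypothetical):

```python
# Overall accuracy of five submodels chained in series, assuming errors compound.
for acc in (0.9, 0.8):
    print(f"{acc:.0%} per submodel -> {acc ** 5:.0%} overall")
# 90% per submodel -> 59% overall
# 80% per submodel -> 33% overall
```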
Setup
The basic setup is the same as in logistic regression, the only difference being that the dependent variables are categorical rather than binary, i.e. there are K possible outcomes rather than just two. The following description is somewhat shortened; for more details, consult the logistic regression article.
Data points
Specifically, it is assumed that we have a series of N observed data points. Each data point i (ranging from 1 to N) consists of a set of M explanatory variables x1,i ... xM,i (also known as independent variables, predictor variables, features, etc.), and an associated categorical outcome Yi (also known as dependent variable, response variable), which can take on one of K possible values. These possible values represent logically separate categories (e.g. different political parties, blood types, etc.), and are often described mathematically by arbitrarily assigning each a number from 1 to K. The explanatory variables and outcome represent observed properties of the data points, and are often thought of as originating in the observations of N "experiments" — although an "experiment" may consist in nothing more than gathering data. The goal of multinomial logistic regression is to construct a model that explains the relationship between the explanatory variables and the outcome, so that the outcome of a new "experiment" can be correctly predicted for a new data point for which the explanatory variables, but not the outcome, are available. In the process, the model attempts to explain the relative effect of differing explanatory variables on the outcome.
Some examples:
• The observed outcomes are different variants of a disease such as hepatitis (possibly including "no disease" and/or other related diseases) in a set of patients, and the explanatory variables might be characteristics of the patients thought to be pertinent (sex, race, age, blood pressure, outcomes of various liver-function tests, etc.). The goal is then to predict which disease is causing the observed liver-related symptoms in a new patient.
• The observed outcomes are the party chosen by a set of people in an election, and the explanatory variables are the demographic characteristics of each person (e.g. sex, race, age, income, etc.). The goal is then to predict the likely vote of a new voter with given characteristics.
Linear predictor
As in other forms of linear regression, multinomial logistic regression uses a linear predictor function $f(k,i)$ to predict the probability that observation i has outcome k, of the following form:
$f(k,i)=\beta _{0,k}+\beta _{1,k}x_{1,i}+\beta _{2,k}x_{2,i}+\cdots +\beta _{M,k}x_{M,i},$
where $\beta _{m,k}$ is a regression coefficient associated with the mth explanatory variable and the kth outcome. As explained in the logistic regression article, the regression coefficients and explanatory variables are normally grouped into vectors of size M+1, so that the predictor function can be written more compactly:
$f(k,i)={\boldsymbol {\beta }}_{k}\cdot \mathbf {x} _{i},$
where ${\boldsymbol {\beta }}_{k}$ is the set of regression coefficients associated with outcome k, and $\mathbf {x} _{i}$ (a row vector) is the set of explanatory variables associated with observation i.
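In code the score is just a dot product. The following NumPy fragment (all coefficient values are made up for illustration) evaluates the linear predictor for every outcome at once:

```python
import numpy as np

x_i = np.array([1.0, 2.5, -0.3])      # [1, x_1, x_2]: the leading 1 absorbs the intercept
beta = np.array([[0.2, 1.0, -2.0],    # beta_1: coefficients for outcome k = 1
                 [0.0, 0.5,  0.7],    # beta_2: coefficients for outcome k = 2
                 [0.1, -1.2, 0.4]])   # beta_3: coefficients for outcome k = 3
scores = beta @ x_i                   # f(k, i) = beta_k . x_i for each outcome k
```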
As a set of independent binary regressions
To arrive at the multinomial logit model, one can imagine, for K possible outcomes, running K-1 independent binary logistic regression models, in which one outcome is chosen as a "pivot" and then the other K-1 outcomes are separately regressed against the pivot outcome. If outcome K (the last outcome) is chosen as the pivot, the K-1 regression equations are:
$\ln {\frac {\Pr(Y_{i}=k)}{\Pr(Y_{i}=K)}}\,=\,{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}\;\;\;\;,\;\;k<K$.
This formulation is also known as the additive log-ratio transform commonly used in compositional data analysis. In other applications it is referred to as "relative risk".[7]
If we exponentiate both sides and solve for the probabilities, we get:
$\Pr(Y_{i}=k)\,=\,{\Pr(Y_{i}=K)}\;e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}\;\;\;\;,\;\;k<K$
Using the fact that all K of the probabilities must sum to one, we find:
$\Pr(Y_{i}=K)\,=\,1-\sum _{j=1}^{K-1}\Pr(Y_{i}=j)\,=\,1-\sum _{j=1}^{K-1}{\Pr(Y_{i}=K)}\;e^{{\boldsymbol {\beta }}_{j}\cdot \mathbf {X} _{i}}\;\;\Rightarrow \;\;\Pr(Y_{i}=K)\,=\,{\frac {1}{1+\sum _{j=1}^{K-1}e^{{\boldsymbol {\beta }}_{j}\cdot \mathbf {X} _{i}}}}$.
We can use this to find the other probabilities:
$\Pr(Y_{i}=k)={\frac {e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}{1+\sum _{j=1}^{K-1}e^{{\boldsymbol {\beta }}_{j}\cdot \mathbf {X} _{i}}}}\;\;\;\;,\;\;k<K$.
The fact that we run multiple regressions reveals why the model relies on the assumption of independence of irrelevant alternatives described above.
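As a sketch of how this formulation turns into code, the following Python function (the shapes and names are illustrative) recovers all K probabilities from the K-1 regressions against the pivot:

```python
import numpy as np

def pivot_probabilities(B, x):
    """Probabilities from K-1 binary logits against pivot class K.

    B has shape (K-1, M+1): row k holds beta_k; x has shape (M+1,).
    """
    expo = np.exp(B @ x)                # e^{beta_k . x} for k = 1, ..., K-1
    p_K = 1.0 / (1.0 + expo.sum())      # Pr(Y_i = K)
    return np.append(p_K * expo, p_K)   # (Pr(Y=1), ..., Pr(Y=K)); sums to 1
```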
Estimating the coefficients
The unknown parameters in each vector βk are typically jointly estimated by maximum a posteriori (MAP) estimation, which is an extension of maximum likelihood using regularization of the weights to prevent pathological solutions (usually a squared regularizing function, which is equivalent to placing a zero-mean Gaussian prior distribution on the weights, but other distributions are also possible). The solution is typically found using an iterative procedure such as generalized iterative scaling,[8] iteratively reweighted least squares (IRLS),[9] by means of gradient-based optimization algorithms such as L-BFGS,[4] or by specialized coordinate descent algorithms.[10]
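The sketch below is a minimal NumPy implementation of MAP estimation with a squared (zero-mean Gaussian prior) regulariser; it uses plain batch gradient descent instead of the specialised iterative-scaling, IRLS, L-BFGS or coordinate-descent procedures cited above, and the regularisation strength, step size and iteration count are arbitrary illustrative values:

```python
import numpy as np

def softmax_rows(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # subtract the row max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_map(X, y, K, lam=1e-2, lr=0.1, iters=2000):
    """X: (N, M+1) design matrix with a leading column of ones; y: labels in 0..K-1."""
    N, D = X.shape
    B = np.zeros((K, D))                   # one coefficient vector per outcome
    Y = np.eye(K)[y]                       # one-hot encoding of the labels
    for _ in range(iters):
        P = softmax_rows(X @ B.T)          # (N, K) predicted probabilities
        grad = (P - Y).T @ X / N + lam * B # gradient of the negative log-posterior
        B -= lr * grad
    return B
```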
As a log-linear model
The formulation of binary logistic regression as a log-linear model can be directly extended to multi-way regression. That is, we model the logarithm of the probability of seeing a given output using the linear predictor as well as an additional normalization factor, the logarithm of the partition function:
$\ln \Pr(Y_{i}=k)={\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}-\ln Z\;\;\;\;,\;\;k\leq K$.
As in the binary case, we need an extra term $-\ln Z$ to ensure that the whole set of probabilities forms a probability distribution, i.e. so that they all sum to one:
$\sum _{k=1}^{K}\Pr(Y_{i}=k)=1$
The reason why we need to add a term to ensure normalization, rather than multiply as is usual, is because we have taken the logarithm of the probabilities. Exponentiating both sides turns the additive term into a multiplicative factor, so that the probability is just the Gibbs measure:
$\Pr(Y_{i}=k)={\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}\;\;\;\;,\;\;k\leq K$.
The quantity Z is called the partition function for the distribution. We can compute the value of the partition function by applying the above constraint that requires all probabilities to sum to 1:
$1=\sum _{k=1}^{K}\Pr(Y_{i}=k)\;=\;\sum _{k=1}^{K}{\frac {1}{Z}}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}\;=\;{\frac {1}{Z}}\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}$
Therefore:
$Z=\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}$
Note that this factor is "constant" in the sense that it is not a function of Yi, which is the variable over which the probability distribution is defined. However, it is definitely not constant with respect to the explanatory variables, or crucially, with respect to the unknown regression coefficients βk, which we will need to determine through some sort of optimization procedure.
The resulting equations for the probabilities are
$\Pr(Y_{i}=k)={\frac {e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}{\sum _{j=1}^{K}e^{{\boldsymbol {\beta }}_{j}\cdot \mathbf {X} _{i}}}}\;\;\;\;,\;\;k\leq K$.
Or generally:
$\Pr(Y_{i}=c)={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{j=1}^{K}e^{{\boldsymbol {\beta }}_{j}\cdot \mathbf {X} _{i}}}}$
The following function:
$\operatorname {softmax} (k,x_{1},\ldots ,x_{n})={\frac {e^{x_{k}}}{\sum _{i=1}^{n}e^{x_{i}}}}$
is referred to as the softmax function. The reason is that the effect of exponentiating the values $x_{1},\ldots ,x_{n}$ is to exaggerate the differences between them. As a result, $\operatorname {softmax} (k,x_{1},\ldots ,x_{n})$ will return a value close to 0 whenever $x_{k}$ is significantly less than the maximum of all the values, and will return a value close to 1 when applied to the maximum value, unless it is extremely close to the next-largest value. Thus, the softmax function can be used to construct a weighted average that behaves as a smooth function (which can be conveniently differentiated, etc.) and which approximates the indicator function
$f(k)={\begin{cases}1\;{\textrm {if}}\;k=\operatorname {\arg \max } (x_{1},\ldots ,x_{n}),\\0\;{\textrm {otherwise}}.\end{cases}}$
Thus, we can write the probability equations as
$\Pr(Y_{i}=c)=\operatorname {softmax} (c,{\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i},\ldots ,{\boldsymbol {\beta }}_{K}\cdot \mathbf {X} _{i})$
The softmax function thus serves as the equivalent of the logistic function in binary logistic regression.
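In practice one subtracts the maximum score before exponentiating (which, by the invariance discussed below, leaves the result unchanged but avoids overflow). Scaling the scores also shows how softmax approaches the indicator of the argmax; a small self-contained demonstration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # shift by the max: same output, no overflow
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])
for t in (1, 5, 25):          # larger t exaggerates the differences
    print(t, softmax(t * x).round(3))
# the output tends to (0, 0, 1), the indicator of the largest score
```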
Note that not all of the $\beta _{k}$ vectors of coefficients are uniquely identifiable. This is due to the fact that all probabilities must sum to 1, making one of them completely determined once all the rest are known. As a result, there are only $K-1$ separately specifiable probabilities, and hence $K-1$ separately identifiable vectors of coefficients. One way to see this is to note that if we add a constant vector to all of the coefficient vectors, the equations are identical:
${\begin{aligned}{\frac {e^{({\boldsymbol {\beta }}_{c}+C)\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{({\boldsymbol {\beta }}_{k}+C)\cdot \mathbf {X} _{i}}}}&={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}e^{C\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}e^{C\cdot \mathbf {X} _{i}}}}\\&={\frac {e^{C\cdot \mathbf {X} _{i}}e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{e^{C\cdot \mathbf {X} _{i}}\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}\\&={\frac {e^{{\boldsymbol {\beta }}_{c}\cdot \mathbf {X} _{i}}}{\sum _{k=1}^{K}e^{{\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}}}}\end{aligned}}$
As a result, it is conventional to set $C=-{\boldsymbol {\beta }}_{K}$ (or alternatively, one of the other coefficient vectors). Essentially, we set the constant so that one of the vectors becomes 0, and all of the other vectors get transformed into the difference between those vectors and the vector we chose. This is equivalent to "pivoting" around one of the K choices, and examining how much better or worse all of the other K-1 choices are, relative to the choice we are pivoting around. Mathematically, we transform the coefficients as follows:
${\begin{aligned}{\boldsymbol {\beta }}'_{k}&={\boldsymbol {\beta }}_{k}-{\boldsymbol {\beta }}_{K}\;\;\;,\;k<K\\{\boldsymbol {\beta }}'_{K}&=0\end{aligned}}$
This leads to the following equations:
$\Pr(Y_{i}=k)={\frac {e^{{\boldsymbol {\beta }}'_{k}\cdot \mathbf {X} _{i}}}{1+\sum _{j=1}^{K-1}e^{{\boldsymbol {\beta }}'_{j}\cdot \mathbf {X} _{i}}}}\;\;\;\;,\;\;k\leq K$
Other than the prime symbols on the regression coefficients, this is exactly the same as the form of the model described above, in terms of K-1 independent two-way regressions.
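This invariance is easy to verify numerically. In the sketch below (random data, for illustration only), adding an arbitrary constant vector C to every coefficient vector, or pivoting on the last class, leaves the probabilities unchanged:

```python
import numpy as np
rng = np.random.default_rng(0)

B = rng.normal(size=(4, 3))    # K = 4 coefficient vectors, M + 1 = 3
x = rng.normal(size=3)
C = rng.normal(size=3)         # arbitrary shift

def probs(B, x):
    e = np.exp(B @ x - (B @ x).max())
    return e / e.sum()

print(np.allclose(probs(B, x), probs(B + C, x)))       # True: C cancels out
print(np.allclose(probs(B, x), probs(B - B[-1], x)))   # True: pivot on class K
```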
As a latent-variable model
It is also possible to formulate multinomial logistic regression as a latent variable model, following the two-way latent variable model described for binary logistic regression. This formulation is common in the theory of discrete choice models, and makes it easier to compare multinomial logistic regression to the related multinomial probit model, as well as to extend it to more complex models.
Imagine that, for each data point i and possible outcome k=1,2,...,K, there is a continuous latent variable Yi,k* (i.e. an unobserved random variable) that is distributed as follows:
$Y_{i,k}^{\ast }={\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}+\varepsilon _{k}\;\;\;\;,\;\;k\leq K$
where $\varepsilon _{k}\sim \operatorname {EV} _{1}(0,1),$ i.e. a standard type-1 extreme value distribution.
This latent variable can be thought of as the utility associated with data point i choosing outcome k, where there is some randomness in the actual amount of utility obtained, which accounts for other unmodeled factors that go into the choice. The value of the actual variable $Y_{i}$ is then determined in a non-random fashion from these latent variables (i.e. the randomness has been moved from the observed outcomes into the latent variables), where outcome k is chosen if and only if the associated utility (the value of $Y_{i,k}^{\ast }$) is greater than the utilities of all the other choices, i.e. if the utility associated with outcome k is the maximum of all the utilities. Since the latent variables are continuous, the probability of two having exactly the same value is 0, so we ignore the scenario. That is:
${\begin{aligned}\Pr(Y_{i}=1)&=\Pr(Y_{i,1}^{\ast }>Y_{i,2}^{\ast }{\text{ and }}Y_{i,1}^{\ast }>Y_{i,3}^{\ast }{\text{ and }}\cdots {\text{ and }}Y_{i,1}^{\ast }>Y_{i,K}^{\ast })\\\Pr(Y_{i}=2)&=\Pr(Y_{i,2}^{\ast }>Y_{i,1}^{\ast }{\text{ and }}Y_{i,2}^{\ast }>Y_{i,3}^{\ast }{\text{ and }}\cdots {\text{ and }}Y_{i,2}^{\ast }>Y_{i,K}^{\ast })\\\cdots &\\\Pr(Y_{i}=K)&=\Pr(Y_{i,K}^{\ast }>Y_{i,1}^{\ast }{\text{ and }}Y_{i,K}^{\ast }>Y_{i,2}^{\ast }{\text{ and }}\cdots {\text{ and }}Y_{i,K}^{\ast }>Y_{i,K-1}^{\ast })\\\end{aligned}}$
Or equivalently:
$\Pr(Y_{i}=k)\;=\;\Pr(\max(Y_{i,1}^{\ast },Y_{i,2}^{\ast },\ldots ,Y_{i,K}^{\ast })=Y_{i,k}^{\ast })\;\;\;\;,\;\;k\leq K$
Let's look more closely at the first equation, which we can write as follows:
${\begin{aligned}\Pr(Y_{i}=1)&=\Pr(Y_{i,1}^{\ast }>Y_{i,k}^{\ast }\ \forall \ k=2,\ldots ,K)\\&=\Pr(Y_{i,1}^{\ast }-Y_{i,k}^{\ast }>0\ \forall \ k=2,\ldots ,K)\\&=\Pr({\boldsymbol {\beta }}_{1}\cdot \mathbf {X} _{i}+\varepsilon _{1}-({\boldsymbol {\beta }}_{k}\cdot \mathbf {X} _{i}+\varepsilon _{k})>0\ \forall \ k=2,\ldots ,K)\\&=\Pr(({\boldsymbol {\beta }}_{1}-{\boldsymbol {\beta }}_{k})\cdot \mathbf {X} _{i}>\varepsilon _{k}-\varepsilon _{1}\ \forall \ k=2,\ldots ,K)\end{aligned}}$
There are a few things to realize here:
1. In general, if $X\sim \operatorname {EV} _{1}(a,b)$ and $Y\sim \operatorname {EV} _{1}(a,b)$ then $X-Y\sim \operatorname {Logistic} (0,b).$ That is, the difference of two independent identically distributed extreme-value-distributed variables follows the logistic distribution, where the first parameter is unimportant. This is understandable since the first parameter is a location parameter, i.e. it shifts the mean by a fixed amount, and if two values are both shifted by the same amount, their difference remains the same. This means that all of the relational statements underlying the probability of a given choice involve the logistic distribution, which makes the initial choice of the extreme-value distribution, which seemed rather arbitrary, somewhat more understandable.
2. The second parameter in an extreme-value or logistic distribution is a scale parameter, such that if $X\sim \operatorname {Logistic} (0,1)$ then $bX\sim \operatorname {Logistic} (0,b).$ This means that the effect of using an error variable with an arbitrary scale parameter in place of scale 1 can be compensated simply by multiplying all regression vectors by the same scale. Together with the previous point, this shows that the use of a standard extreme-value distribution (location 0, scale 1) for the error variables entails no loss of generality over using an arbitrary extreme-value distribution. In fact, the model is nonidentifiable (no single set of optimal coefficients) if the more general distribution is used.
3. Because only differences of vectors of regression coefficients are used, adding an arbitrary constant to all coefficient vectors has no effect on the model. This means that, just as in the log-linear model, only K-1 of the coefficient vectors are identifiable, and the last one can be set to an arbitrary value (e.g. 0).
Actually finding the values of the above probabilities is somewhat difficult, and is a problem of computing a particular order statistic (the first, i.e. maximum) of a set of values. However, it can be shown that the resulting expressions are the same as in above formulations, i.e. the two are equivalent.
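The equivalence can also be checked by Monte Carlo simulation (the so-called Gumbel-max construction): adding standard type-1 extreme value noise to each score and taking the argmax reproduces the softmax probabilities. The scores below are made up for illustration:

```python
import numpy as np
rng = np.random.default_rng(1)

scores = np.array([0.5, 1.5, -0.2])           # beta_k . X_i for K = 3 outcomes
n = 200_000
noise = rng.gumbel(size=(n, 3))               # standard type-1 extreme value draws
choices = np.argmax(scores + noise, axis=1)   # latent-utility maximiser
empirical = np.bincount(choices, minlength=3) / n

e = np.exp(scores - scores.max())
print(empirical)        # close to the softmax probabilities below
print(e / e.sum())
```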
Estimation of intercept
When using multinomial logistic regression, one category of the dependent variable is chosen as the reference category. Separate odds ratios are determined for all independent variables for each category of the dependent variable with the exception of the reference category, which is omitted from the analysis. The exponential beta coefficient represents the change in the odds of the dependent variable being in a particular category vis-a-vis the reference category, associated with a one unit change of the corresponding independent variable.
Application in natural language processing
In natural language processing, multinomial LR classifiers are commonly used as an alternative to naive Bayes classifiers because they do not assume statistical independence of the random variables (commonly known as features) that serve as predictors. However, learning in such a model is slower than for a naive Bayes classifier, and thus may not be appropriate given a very large number of classes to learn. In particular, learning in a naive Bayes classifier is a simple matter of counting up the number of co-occurrences of features and classes, while in a maximum entropy classifier the weights, which are typically estimated by maximum a posteriori (MAP) estimation, must be learned using an iterative procedure; see the section on estimating the coefficients above.
See also
• Logistic regression
• Multinomial probit
References
1. Greene, William H. (2012). Econometric Analysis (Seventh ed.). Boston: Pearson Education. pp. 803–806. ISBN 978-0-273-75356-8.
2. Engel, J. (1988). "Polytomous logistic regression". Statistica Neerlandica. 42 (4): 233–252. doi:10.1111/j.1467-9574.1988.tb01238.x.
3. Menard, Scott (2002). Applied Logistic Regression Analysis. SAGE. p. 91. ISBN 9780761922087.
4. Malouf, Robert (2002). A comparison of algorithms for maximum entropy parameter estimation (PDF). Sixth Conf. on Natural Language Learning (CoNLL). pp. 49–55.
5. Belsley, David (1991). Conditioning diagnostics : collinearity and weak data in regression. New York: Wiley. ISBN 9780471528890.
6. Baltas, G.; Doyle, P. (2001). "Random Utility Models in Marketing Research: A Survey". Journal of Business Research. 51 (2): 115–125. doi:10.1016/S0148-2963(99)00058-2.
7. Stata Manual “mlogit — Multinomial (polytomous) logistic regression”
8. Darroch, J.N. & Ratcliff, D. (1972). "Generalized iterative scaling for log-linear models". The Annals of Mathematical Statistics. 43 (5): 1470–1480. doi:10.1214/aoms/1177692379.
9. Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. pp. 206–209.
10. Yu, Hsiang-Fu; Huang, Fang-Lan; Lin, Chih-Jen (2011). "Dual coordinate descent methods for logistic regression and maximum entropy models" (PDF). Machine Learning. 85 (1–2): 41–75. doi:10.1007/s10994-010-5221-8.
\begin{document}
\title{Exact Penalty Functions for Optimal Control Problems~II}
\begin{abstract} The second part of our study is devoted to an analysis of the exactness of penalty functions for optimal control problems with terminal and pointwise state constraints. We demonstrate that with the use of the exact penalty function method one can reduce fixed-endpoint problems for linear time-varying systems and linear evolution equations with convex constraints on the control inputs to completely equivalent free-endpoint optimal control problems, if the terminal state belongs to the relative interior of the reachable set. In the nonlinear case, we prove that a local reduction of fixed-endpoint and variable-endpoint problems to equivalent free-endpoint ones is possible under the assumption that the linearised system is completely controllable, and point out some general properties of nonlinear systems under which a global reduction to equivalent free-endpoint problems can be achieved. In the case of problems with pointwise state inequality constraints, we prove that such problems for linear time-varying systems and linear evolution equations with convex state constraints can be reduced to equivalent problems without state constraints, provided one uses the $L^{\infty}$ penalty term, and Slater's condition holds true, while for nonlinear systems a local reduction is possible, if a natural constraint qualification is satisfied. Finally, we show that the exact $L^p$-penalisation of state constraints with finite $p$ is possible for convex problems, if Lagrange multipliers corresponding to the state constraints belong to $L^{p'}$, where $p'$ is the conjugate exponent of $p$, and for general nonlinear problems, if the cost functional does not depend on the control inputs explicitly. \end{abstract}
\section{Introduction}
The exact penalty method is an important tool for solving constrained optimisation problems. Many publications have been devoted to its analysis from various perspectives (see, e.g. References \cite{EvansGouldTolle,HanMangasarian,DiPilloGrippo86,DiPilloGrippo88,DiPilloGrippo89,Burke91,DiPillo94,ExactBarrierFunc, Zaslavski,Dolgopolik_ExPen_I, Dolgopolik_ExPen_II,Strekalovsky2019}). The main idea of this method consists in the reduction of a constrained optimisation problem, say \begin{equation} \label{MathProgram_Intro}
\min_{x \in \mathbb{R}^d} f(x) \quad \text{subject to} \quad g_i(x) \le 0, \quad i \in \{ 1, \ldots, n \} \end{equation} to the unconstrained optimisation problem of minimising the nonsmooth penalty function: $$
\min_{x \in \mathbb{R}^d} \Phi_{\lambda}(x) = f(x) + \lambda \sum_{i = 1}^n \max\{ g_i(x), 0 \}. $$ Under some natural assumptions such as the coercivity of $\Phi_{\lambda}$ and the validity of a suitable constraint qualification one can prove that for any sufficiently large (but finite) value of the penalty parameter $\lambda$ the penalised problem is equivalent to the original problem in the sense that these problems have the same optimal value and the same globally optimal solutions. In this case the penalty function $\Phi_{\lambda}$ is called \textit{exact}. Under some additional assumptions not only globally optimal solutions, but also locally optimal solutions and stationary (critical) points of these problems coincide. In this case the penalty function $\Phi_{\lambda}$ is called \textit{completely exact}. Finally, if a given locally optimal solution of problem \eqref{MathProgram_Intro} is a point of local minimum of $\Phi_{\lambda}$ for any sufficiently large $\lambda$, then $\Phi_{\lambda}$ is said to be \textit{locally exact} at this solution.
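To make this construction concrete, the following Python sketch minimises $\Phi_{\lambda}$ for the toy problem of minimising $x_1 + x_2$ subject to $x_1^2 + x_2^2 - 1 \le 0$. The solver, the starting point, and the values of $\lambda$ below are ad hoc choices made purely for illustration:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + x[1]                  # objective
g = lambda x: x[0]**2 + x[1]**2 - 1.0      # constraint g(x) <= 0

def phi(x, lam):                           # nonsmooth penalty function
    return f(x) + lam * max(g(x), 0.0)

for lam in (0.1, 1.0, 10.0):
    res = minimize(phi, x0=np.array([2.0, 2.0]), args=(lam,),
                   method="Nelder-Mead")
    print(lam, res.x.round(3), round(res.fun, 3))
\end{verbatim}
For $\lambda = 0.1$ the minimiser of $\Phi_{\lambda}$ is the infeasible point $(-5, -5)$, while for any $\lambda$ greater than the Lagrange multiplier $\mu = 1/\sqrt{2}$ of the constraint the minimiser coincides with the globally optimal solution $(-1/\sqrt{2}, -1/\sqrt{2})$ of the constrained problem, i.e. the penalty function is exact.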
Thus, the exactness property of a penalty function allows one to reduce (locally or globally) a constrained optimisation problem to the equivalent unconstrained problem of minimising a penalty function and, as a result, apply numerical methods of unconstrained optimisation to constrained problems. However, note that exact penalty functions not depending on the derivatives of the objective function and constraints are inherently nonsmooth (see, e.g. \cite[Remark~3]{Dolgopolik_ExPen_I}), and one has to either utilise general methods of nonsmooth optimisation to minimise exact penalty functions or develop specific methods for minimising such functions that take into account their structure. See~the works of M\"{a}kel\"{a} et al.\cite{Makela2002,KarmitsaBagriovMakela2012,BagirovKarmitsaMakela_book} for a survey and comparative analysis of modern nonsmooth optimisation methods and software.
Numerical methods for solving optimal control problems based on exact penalty functions were developed by Maratos \cite{MaratosPHD} and in a series of papers by Mayne et al. \cite{MaynePolak80,MayneSmith83,MaynePolak85,MaynePolak87,SmithMayne88} (see also monograph \cite{Polak_book}). An exact penalty method for optimal control problems with delay was proposed by Wong and Teo \cite{WongTeo91}, and such method for some nonsmooth optimal control problems was studied in the works of Outrata et al. \cite{Outrata83,OutrataSchindler,Outrata88}. A continuous numerical method for optimal control problems based on the direct minimisation of an exact penalty function was considered in recent paper \cite{FominyhKarelin2018}. Finally, closely related methods based on Huyer and Neumaier's exact penalty function\cite{HuyerNeumaier,WangMaZhou,Dolgopolik_ExPen_II} were developed for optimal control problems with state inequality constraints\cite{LiYu2011,JiangLin2012} and optimal feedback control problems\cite{LinLoxton2014}.
Despite the abundance of publications on exact penalty methods for optimal control problems, relatively little attention has been paid to an actual analysis of the exactness of penalty functions for such problems. To the best of the author's knowledge, the possibility of the exact penalisation of pointwise state constraints for optimal control problems was first mentioned by Luenberger \cite{Luenberger70}; however, no particular conditions ensuring exact penalisation were given in this paper. Lasserre \cite{Lasserre} proved that a stationary point of an optimal control problem with endpoint equality and state inequality constraints is also a stationary point of a nonsmooth penalty function for this problem (this result is closely related to the local exactness). The local exactness of a penalty function for problems with state inequality constraints was proved by Xing et al. \cite{Xing89,Xing94} under the assumption that certain second order sufficient optimality conditions are satisfied. First results on the \textit{global} exactness of penalty functions for optimal control problems were probably obtained by Demyanov et al. for a problem of finding optimal parameters in a system described by ordinary differential equations \cite{DemyanovKarelin98}, free-endpoint optimal control problems \cite{DemyanovKarelin2000_InCollect,DemyanovKarelin2000,Karelin}, and certain optimal control problems for implicit control systems \cite{DemyanovTamasyan2005}. However, the main results of these papers are based on the assumptions that the penalty function attains a global minimum in the space of piecewise continuous functions for any sufficiently large value of the penalty parameter, and the cost functional is Lipschitz continuous on a possibly unbounded and rather complicated set. It is unclear how to verify these assumptions in particular cases, which makes it very difficult to apply the main results of papers \cite{DemyanovKarelin98,DemyanovKarelin2000_InCollect,DemyanovKarelin2000,Karelin,DemyanovTamasyan2005} to real problems. To the best of the author's knowledge, the only verifiable sufficient conditions for the global exactness of penalty functions for optimal control problems were obtained by Gugat\cite{Gugat} for an optimal control of the wave equation, by Gugat and Zuazua\cite{Zuazua} for optimal control problems for general linear evolution equations, and by Jayswal and Preeti\cite{Jayswal} for a PDE constrained optimal control problem with state inequality constraints. In papers\cite{Gugat,Zuazua} only the exact penalisation of the terminal constraint was analysed, while in article\cite{Jayswal} the exact penalisation of state inequality constraints and system dynamics was proved under some restrictive convexity assumptions.
The main goal of this two-part study is to develop a general theory of exact penalty functions for optimal control problems containing sufficient conditions for the complete or local exactness of penalty functions that can be readily verified in various particular cases. In the first part of our study\cite{DolgopolikFominyh} we obtained simple sufficient conditions for the exactness of penalty functions for free-endpoint optimal control problems. This result allows one to apply numerical methods for solving variational problems to free-endpoint optimal control problems.
In the second part of our study we analyse the exactness of penalty functions for problems with terminal and pointwise state constraints. In the first half of this paper, we study when a penalisation of the terminal constraint is exact, i.e. when the fixed-endpoint problem \begin{align*}
&\min_{(x, u)} \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \\
&\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad x(T) = x_T, \quad u \in U \end{align*} is equivalent to the penalised free-endpoint one \begin{align*}
&\min_{(x, u)} \: \Phi_{\lambda}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt + \lambda | x(T) - x_T | \\
&\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad u \in U. \end{align*} We prove that fixed-endpoint problems for linear time-varying systems and linear evolution equations in Hilbert spaces with convex constraints on the control inputs (i.e. the set $U$ is convex) are equivalent to the corresponding penalised free-endpoint problems, if the terminal state $x_T$ belongs to the relative interior of the reachable set. This result significantly generalises the one of Gugat and Zuazua \cite{Zuazua} (see Remark~\ref{Remark_ComparisonZuazua} for a detailed discussion). In the case of nonlinear problems, we show that the penalty function $\Phi_{\lambda}$ is locally exact at a given locally optimal solution, if the corresponding linearised system is completely controllable, and point out some general assumptions on the system $\dot{x} = f(x, u, t)$ that ensure the complete exactness of $\Phi_{\lambda}$. We also present an extension of these results to the case of variable-endpoint problems.
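Before turning to the general theory, let us give a simple numerical illustration of this reduction (it is not used in the proofs below). Consider the double integrator $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ steered from $(0, 0)$ to $(1, 0)$ on $[0, 1]$ with the cost $\int_0^1 u(t)^2 \, dt$. The Python sketch below discretises the penalised free-endpoint problem by explicit Euler single shooting and minimises $\Phi_{\lambda}$ with a generic derivative-free solver; the mesh size, the value $\lambda = 50$, and the choice of solver are all ad hoc:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 20
dt = T / N
xT = np.array([1.0, 0.0])            # prescribed terminal state

def rollout(u):                      # explicit Euler for x1' = x2, x2' = u
    x = np.zeros(2)
    for uk in u:
        x = x + dt * np.array([x[1], uk])
    return x

def Phi(u, lam):                     # discretised penalised functional
    return dt * np.sum(u**2) + lam * np.linalg.norm(rollout(u) - xT)

res = minimize(Phi, x0=np.zeros(N), args=(50.0,), method="Powell")
print(res.fun, rollout(res.x))       # terminal state nearly equal to xT
\end{verbatim}
For this linear system the terminal state lies in the interior of the reachable set, and one observes that for all sufficiently large $\lambda$ the computed control steers the system (up to discretisation error) to $x_T$.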
In the second half of this paper, we study the exact penalisation of pointwise state constraints, i.e. we study when the optimal control problem with pointwise state inequality constraints \begin{align*}
&\min_{(x, u)} \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \quad
\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \\
&x(0) = x_0, \quad x(T) = x_T, \quad u \in U, \quad
g_j(x(t), t) \le 0 \quad \forall t \in [0, T], \quad j \in \{ 1, \ldots, l \} \end{align*} is equivalent to the penalised problem without state constraints \begin{align*}
&\min_{(x, u)} \: \Phi_{\lambda}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt +
\lambda \| \max\{ g_1(x(\cdot), \cdot), \ldots, g_l(x(\cdot), \cdot), 0 \} \|_p \\
&\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad x(T) = x_T, \quad u \in U \end{align*}
for some $1 \le p \le + \infty$ (here $\| \cdot \|_p$ is the standard norm in $L^p(0, T)$). In the case of problems for linear time-varying systems and linear evolution equation with convex state constraints, we prove that the penalisation of state constraints is exact, if $p = + \infty$, and Slater's condition holds true, i.e. there exists a feasible point $(x, u)$ such that $g_j(x(t), t) < 0$ for all $t \in [0, T]$ and $j \in \{ 1, \ldots, l \}$. In the nonlinear case, we prove the local exactness of $\Phi_{\lambda}$ with $p = + \infty$ under the assumption that a suitable constraint qualification is satisfied. Finally, we demonstrate that under some additional assumptions the exact $L^p$ penalisation of state constraints with finite $p$ is possible for convex problems, if Lagrange multipliers corresponding to state constraints belong to $L^{p'}(0, T)$, and for nonlinear problems, if the cost functional $\mathcal{I}$ does not depend on the control inputs $u$ explicitly.
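For completeness, let us note how the penalty term is evaluated after discretisation. In the Python fragment below (the uniform mesh and the array of constraint samples are hypothetical), $G$ contains the values $g_j(x(t_i), t_i)$, and the function returns $\| \max\{ g_1, \ldots, g_l, 0 \} \|_p$ for both finite $p$ and $p = + \infty$:
\begin{verbatim}
import numpy as np

def state_penalty(G, dt, p):
    """Discretised L^p penalty term; G has shape (l, N)."""
    v = np.maximum(G.max(axis=0), 0.0)         # pointwise constraint violation
    if np.isinf(p):
        return v.max()                         # L^infinity norm
    return (dt * np.sum(v**p)) ** (1.0 / p)    # L^p norm on [0, T]
\end{verbatim}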
The paper is organised as follows. Some basic definitions and results from the general theory of exact penalty functions for optimisation problems in metric spaces are collected in Section~\ref{Sect_ExactPenaltyFunctions}, so that the second part of the paper can be read independently of the first one. Section~\ref{Sect_ExactPen_TerminalConstraint} is devoted to the analysis of exact penalty functions for fixed-endpoint and variable-endpoint problems, while exact penalty functions for optimal control problems with state constraints are considered in Section~\ref{Sect_ExactPen_StateConstraint}. Finally, a proof of a general theorem on completely exact penalty function from Section~\ref{Sect_ExactPenaltyFunctions} is given in Appendix~A, while Appendix~B contains some useful results on Nemytskii operators that are utilised throughout the paper.
\section{Exact Penalty Functions in Metric Spaces} \label{Sect_ExactPenaltyFunctions}
In this section we recall some basic definitions and results from the theory of exact penalty functions that will be utilised throughout the article (see papers\cite{Dolgopolik_ExPen_I,Dolgopolik_ExPen_II} for more details). Let $(X, d)$ be a metric space, $M, A \subseteq X$ be nonempty sets such that $M \cap A \ne \emptyset$, and $\mathcal{I} \colon X \to \mathbb{R} \cup \{ + \infty \}$ be a given function. Consider the following optimisation problem: $$
\min_{x \in X} \: \mathcal{I}(x) \quad \text{subject to} \quad x \in M \cap A. \eqno{(\mathcal{P})} $$ Here the sets $M$ and $A$ represent two different types of constraints, e.g. pointwise and terminal constraints or linear and nonlinear constraints, etc. In what follows, we suppose that there exists a globally optimal solution $x^*$ of the problem $(\mathcal{P})$ such that $\mathcal{I}(x^*) < + \infty$, i.e. the optimal value of this problem is finite and is attained.
Our aim is to ``get rid'' of the constraint $x \in M$ without losing any essential information about (locally or globally) optimal solutions of the problem $(\mathcal{P})$. To this end, we apply the exact penalty function technique. Let $\varphi \colon X \to [0, + \infty]$ be a function such that $\varphi(x) = 0$ iff $x \in M$. For example, if $M$ is closed, one can put $\varphi(x) = \dist(x, M) = \inf_{y \in M} d(x, y)$. For any $\lambda \ge 0$ define $\Phi_{\lambda}(x) = \mathcal{I}(x) + \lambda \varphi(x)$. The function $\Phi_{\lambda}$ is called \textit{a penalty function} for the problem $(\mathcal{P})$ (corresponding to the constraint $x \in M$), $\lambda$ is called \textit{a penalty parameter}, and $\varphi$ is called \textit{a penalty term} for the constraint $x \in M$.
Observe that the function $\Phi_{\lambda}(x)$ is non-decreasing in $\lambda$, and $\Phi_{\lambda}(x) \ge \mathcal{I}(x)$ for all $x \in X$ and $\lambda \ge 0$. Furthermore, for any $\lambda > 0$ one has $\Phi_{\lambda}(x) = \mathcal{I}(x)$ iff $x \in M$. Therefore, it is natural to consider \textit{the penalised problem} \begin{equation} \label{PenalizedProblem}
\min_{x \in X} \Phi_{\lambda}(x) \quad \text{subject to} \quad x \in A. \end{equation} Note that this problem has only one constraint ($x \in A$), while the constraint $x \in M$ is incorporated into the new objective function $\Phi_{\lambda}$. We would like to know when this problem is, in some sense, equivalent to the problem $(\mathcal{P})$, i.e. when the penalty function $\Phi_{\lambda}$ is \textit{exact}.
\begin{definition} The penalty function $\Phi_{\lambda}$ is called (globally) \textit{exact}, if there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the set of globally optimal solutions of the penalised problem \eqref{PenalizedProblem} coincides with the set of globally optimal solutions of the problem $(\mathcal{P})$. \end{definition}
From the fact that $\Phi_{\lambda}(x) = \mathcal{I}(x)$ for any feasible point $x$ of the problem $(\mathcal{P})$ it follows that if $\Phi_{\lambda}$ is globally exact, then the optimal values of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide. Thus, the penalty function $\Phi_{\lambda}$ is globally exact iff the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} are equivalent in the sense that they have the same globally optimal solutions and the same optimal value. However, optimisation methods often can find only locally optimal solutions (or even only stationary/critical points) of an optimisation problem. Therefore, the concept of the global exactness of the penalty function $\Phi_{\lambda}$ is not entirely satisfactory for practical applications. One needs to ensure that not only globally optimal solutions, but also local minimisers and stationary points of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide. To provide conditions under which such \textit{complete exactness} takes place we need to recall the definitions of the \textit{rate of steepest descent}\cite{Demyanov2000,Demyanov2010,Uderzo} and \textit{inf-stationary point}\cite{Demyanov2000,Demyanov2010} of a function defined on a metric space.
Let $K \subset X$ and $f \colon X \to \mathbb{R} \cup \{ + \infty \}$ be given, and $x \in K$ be such that $f(x) < + \infty$. The quantity $$
f^{\downarrow}_K(x) = \liminf_{y \to x, y \in K} \frac{f(y) - f(x)}{d(y, x)} $$
is called \textit{the rate of steepest descent} of $f$ with respect to the set $K$ at the point $x$. If $x$ is an isolated point of $K$, then by definition $f^{\downarrow}_K(x) = + \infty$. It should be noted that the rate of steepest descent of $f$ at $x$ is closely connected to the so-called strong slope $|\nabla f|(x)$ of $f$ at $x$. See papers\cite{Dolgopolik_ExPen_II,Aze,Kruger} for some calculus rules for strong slope/rate of steepest descent, and the ways one can estimate them in various particular cases.
Let $x^* \in K$ be such that $f(x^*) < + \infty$. The point $x^*$ is called an \textit{inf-stationary} point of $f$ on the set $K$ if $f^{\downarrow}_K(x^*) \ge 0$. Observe that the inequality $f^{\downarrow}_K(x^*) \ge 0$ is a necessary optimality condition for the problem $$
\min_{x \in X} \: f(x) \quad \text{subject to} \quad x \in K. $$ In the case when $X$ is a normed space, $K$ is convex, and $f$ is Fr\'{e}chet differentiable at $x^*$ the inequality $f^{\downarrow}_K(x^*) \ge 0$ is reduced to the standard optimality condition: $f'(x^*)[x - x^*] \ge 0$ for all $x \in K$, where $f'(x^*)$ is the Fr\'{e}chet derivative of $f$ at $x^*$.
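To illustrate these notions with the simplest possible example (our own), let $X = \mathbb{R}$ with the usual metric, $K = [0, + \infty)$, and $x^* = 0$. For $f(x) = x$ one has $$ f^{\downarrow}_K(0) = \liminf_{y \to 0, \: y > 0} \frac{y - 0}{|y - 0|} = 1 \ge 0, $$ i.e. $0$ is an inf-stationary point of $f$ on $K$ (it is, in fact, the global minimiser of $f$ on $K$), while for $f(x) = -x$ one has $f^{\downarrow}_K(0) = -1 < 0$, so that $0$ is not an inf-stationary point, in accordance with the fact that $f$ strictly decreases along $K$.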
Now we can formulate sufficient conditions for the complete exactness of the penalty function $\Phi_{\lambda}$. For any $\lambda \ge 0$ and $c \in \mathbb{R}$ denote $S_{\lambda}(c) = \{ x \in A \mid \Phi_{\lambda}(x) < c \}$. Let also $\Omega = M \cap A$ be the feasible region of $(\mathcal{P})$, and for any $\delta > 0$ define $\Omega_{\delta} = \{ x \in A \mid \varphi(x) < \delta \}$.
\begin{theorem} \label{Theorem_CompleteExactness} Let $X$ be a complete metric space, $A$ be closed, $\mathcal{I}$ and $\varphi$ be lower semicontinuous on $A$, and $\varphi$ be continuous at every point of the set $\Omega$. Suppose also that there exist $c > \mathcal{I}^* = \inf_{x \in \Omega} \mathcal{I}(x)$, $\lambda_0 > 0$, and $\delta > 0$ such that \begin{enumerate} \item{there exists an open set $V$ such that $S_{\lambda_0}(c) \cap \Omega_{\delta} \subset V$ and the functional $\mathcal{I}$ is Lipschitz continuous on $V$; }
\item{there exists $a > 0$ such that $\varphi^{\downarrow}_A(x) \le - a$ for all $x \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$; \label{NegativeDescentRateAssumpt}}
\item{$\Phi_{\lambda_0}$ is bounded below on $A$. } \end{enumerate} Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the following statements hold true: \begin{enumerate} \item{the optimal values of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide; }
\item{globally optimal solutions of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide; }
\item{$x^* \in S_{\lambda}(c)$ is a locally optimal solution of the penalised problem \eqref{PenalizedProblem} iff $x^* \in \Omega$, and it is a locally optimal solution of the problem $(\mathcal{P})$; }
\item{$x^* \in S_{\lambda}(c)$ is an inf-stationary point of $\Phi_{\lambda}$ on $A$ iff $x^* \in \Omega$, and it is an inf-stationary point of $\mathcal{I}$ on $\Omega$. } \end{enumerate} \end{theorem}
If the penalty function $\Phi_{\lambda}$ satisfies the four statements of this theorem, then it is said to be \textit{completely exact} on the set $S_{\lambda}(c)$. The proof of Theorem~\ref{Theorem_CompleteExactness} is given in the first part of our study\cite{DolgopolikFominyh}.
\begin{remark} \label{Remark_OmegaDeltaEmpty} Let us note that Theorem~\ref{Theorem_CompleteExactness} is valid even in the case when the set $\Omega_{\delta} \setminus \Omega$ is empty. Moreover, if $\Omega_{\delta} \setminus \Omega = \emptyset$ for some $\delta > 0$, then the penalty function $\Phi_{\lambda}$ is completely exact on $S_{\lambda}(c)$ for any $c > \mathcal{I}^*$, provided there exists $\lambda_0 \ge 0$ such that $\Phi_{\lambda_0}$ is bounded below on $A$. Indeed, in this case for any $\lambda \ge \lambda_0$ and $x \notin \Omega_{\delta}$ one has $$
\Phi_{\lambda}(x) = \Phi_{\lambda_0}(x) + (\lambda - \lambda_0) \varphi(x)
\ge \eta + (\lambda - \lambda_0) \delta \ge c \quad \forall \lambda \ge \lambda^* = \lambda_0 + (c - \eta) / \delta, $$ where $\eta = \inf_{x \in A} \Phi_{\lambda_0}(x)$, which implies that $S_{\lambda}(c) \subseteq \Omega$ for any $\lambda \ge \lambda^*$. Hence taking into account the fact that $\Phi_{\lambda}(x) = \mathcal{I}(x)$ for any $x \in \Omega$ one obtains that the first two statements of Theorem~\ref{Theorem_CompleteExactness} hold true, and if $x^* \in S_{\lambda}(c)$ is a local minimiser/inf-stationary point of $\Phi_{\lambda}$ on $A$, then $x^* \in \Omega$ and it is a local minimiser/inf-stationary point of $\mathcal{I}$ on $\Omega$, provided $\lambda \ge \lambda^*$. On the other hand, if $\lambda \ge \lambda^*$ and $x^* \in S_{\lambda}(c)$ is a locally optimal solution of the problem $(\mathcal{P})$, then for any $x$ in a neighbourhood of $x^*$, either $x \in \Omega$ and $\Phi_{\lambda}(x) = \mathcal{I}(x) \ge \mathcal{I}(x^*) = \Phi_{\lambda}(x^*)$ or $x \notin \Omega$ and $\Phi_{\lambda}(x) \ge c > \Phi_{\lambda}(x^*)$, i.e. $x^*$ is a locally optimal solution of the penalised problem \eqref{PenalizedProblem}. The analogous statement for inf-stationary points is proved in a similar way. \end{remark}
Under the assumptions of Theorem~\ref{Theorem_CompleteExactness} nothing can be said about locally optimal solutions of the penalised problem and inf-stationary points of $\Phi_{\lambda}$ on $A$ that do not belong to the sublevel set $S_{\lambda}(c)$. If a numerical method for minimising the penalty function $\Phi_{\lambda}$ finds a point $x^* \notin S_{\lambda}(c)$, then this point might even be infeasible for the original problem (in this case, usually, either constraints are degenerate in some sense at $x^*$ or $\mathcal{I}$ is not Lipschitz continuous near this point). Under more restrictive assumptions one can exclude such a possibility, i.e. prove that the penalty function $\Phi_{\lambda}$ is \textit{completely exact on} $A$, i.e. on $S_{\lambda}(c)$ with $c = + \infty$. Namely, the following theorem holds true.\footnote{This result as well as its applications in the following sections were inspired by a question raised by one of the reviewers of the first part of our study. The author wishes to express his gratitude to the reviewer for raising this question.} Its proof is given in Appendix~A.
\begin{theorem} \label{THEOREM_COMPLETEEXACTNESS_GLOBAL} Let $X$ be a complete metric space, $A$ be closed, $\mathcal{I}$ be Lipschitz continuous on $A$, and $\varphi$ be lower semicontinuous on $A$ and continuous at every point of the set $\Omega$. Suppose also that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x) \le - a$ for all $x \in A \setminus \Omega$, and the function $\Phi_{\lambda_0}$ is bounded below on $A$ for some $\lambda_0 \ge 0$. Then the penalty function $\Phi_{\lambda}$ is completely exact on $A$. \end{theorem}
In some important cases it might be very difficult (if at all possible) to verify the assumptions of Theorems~\ref{Theorem_CompleteExactness} and \ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} and prove the complete exactness of the penalty function $\Phi_{\lambda}$. In these cases one can try to check whether $\Phi_{\lambda}$ is at least \textit{locally} exact.
\begin{definition} Let $x^*$ be a locally optimal solution of the problem $(\mathcal{P})$. The penalty function $\Phi_{\lambda}$ is said to be \textit{locally exact} at $x^*$, if there exists $\lambda^*(x^*) \ge 0$ such that $x^*$ is a point of local minimum of the penalised problem \eqref{PenalizedProblem} for any $\lambda \ge \lambda^*(x^*)$. \end{definition}
Thus, if the penalty function $\Phi_{\lambda}$ is locally exact at a locally optimal solution $x^*$, then one can ``get rid'' of the constraint $x \in M$ in a neighbourhood of $x^*$ with the use of the penalty function $\Phi_{\lambda}$, since by definition $x^*$ is a local minimiser of $\Phi_{\lambda}$ on $A$ for any sufficiently large $\lambda$. The following theorem, which is a particular case of \cite[Theorem~2.4 and Proposition~2.7]{Dolgopolik_ExPen_I}, contains simple sufficient conditions for the local exactness. Let $B(x, r) = \{ y \in X \mid d(x, y) \le r \}$ for any $x \in X$ and $r > 0$.
\begin{theorem} \label{Theorem_LocalExactness} Let $x^*$ be a locally optimal solution of the problem $(\mathcal{P})$. Suppose also that $\mathcal{I}$ is Lipschitz continuous near $x^*$ with Lipschitz constant $L > 0$, and there exist $r > 0$ and $a > 0$ such that \begin{equation} \label{PenaltyTerm_LocalErrorBound}
\varphi(x) \ge a \dist(x, \Omega) \quad \forall x \in B(x^*, r) \cap A. \end{equation} Then the penalty function $\Phi_{\lambda}$ is locally exact at $x^*$ with $\lambda^*(x^*) \le L / a$. \end{theorem}
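As a simple sanity check of the bound $\lambda^*(x^*) \le L / a$ (again, a toy illustration of our own), take $X = A = \mathbb{R}$, $\mathcal{I}(x) = x^2$, and $M = \Omega = [1, + \infty)$, so that $\varphi(x) = \dist(x, M) = \max\{ 0, 1 - x \}$ and \eqref{PenaltyTerm_LocalErrorBound} holds with $a = 1$. The point $x^* = 1$ is a locally optimal solution, and $\mathcal{I}$ is Lipschitz continuous on $B(1, r)$ with $L = 2(1 + r)$. For $x < 1$ one has $\Phi_{\lambda}(x) - \Phi_{\lambda}(1) = (1 - x)(\lambda - 1 - x)$, which is nonnegative near $x = 1$ iff $\lambda \ge 2$. Thus $\Phi_{\lambda}$ is locally exact at $x^*$ precisely when $\lambda \ge 2$, in agreement with the estimate $\lambda^*(x^*) \le L / a = 2(1 + r) \to 2$ as $r \to 0$.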
Let us also point out a useful result \cite[Corollary~2.2]{Cominetti} that allows one to easily verify inequality \eqref{PenaltyTerm_LocalErrorBound} for a large class of optimisation and optimal control problems.
\begin{theorem} \label{Theorem_LocalErrorBound} Let $X$ and $Y$ be Banach spaces, $C \subseteq X$ and $K \subset Y$ be closed convex sets, and $F \colon X \to Y$ be a given mapping. Suppose that $F$ is strictly differentiable at a point $x^* \in C$ such that $F(x^*) \in K$, $D F(x^*)$ is its Fr\'{e}chet derivative at $x^*$, and \begin{equation} \label{MetricRegCond}
0 \in \core\Big[ DF(x^*)(C - x^*) - (K - F(x^*)) \Big], \end{equation} where ``$\core$'' is the algebraic interior. Then there exist $r > 0$ and $a > 0$ such that $$
\dist(F(x), K) \ge a \dist( x, F^{-1}(K) \cap C) \quad \forall x \in B(x^*, r) \cap C. $$ \end{theorem}
\begin{remark} Let $C$, $K$, and $F$ be as in the previous theorem. Suppose that $A = C$ and $M = \{ x \in X \mid F(x) \in K \}$. Then $\Omega = F^{-1}(K) \cap C$, and one can define $\varphi(\cdot) = \dist(F(\cdot), K)$. In this case under the assumptions of Theorem~\ref{Theorem_LocalErrorBound} constraint qualification \eqref{MetricRegCond} guarantees that $\varphi(x) \ge a \dist( x, F^{-1}(K) \cap C) = a \dist( x, \Omega )$ for all $x \in B(x^*, r) \cap A$, i.e. \eqref{PenaltyTerm_LocalErrorBound} holds true. \end{remark}
In the linear case, the following nonlocal version of Robinson-Ursescu's theorem due to Robinson \cite[Theorems~1 and 2]{Robinson76} (see also~\cite{Cominetti,Ioffe}) is very helpful for verifying inequality \eqref{PenaltyTerm_LocalErrorBound} and the exactness of penalty functions.
\begin{theorem}[Robinson] \label{Theorem_Robinson_Ursescu} Let $X$ and $Y$ be Banach spaces, $\mathcal{T} \colon X \to Y$ be a bounded linear operator, and $C \subset X$ be a closed convex set. Suppose that $x^* \in C$ is such that the point $y^* = \mathcal{T} x^*$ belongs to the interior $\interior(\mathcal{T}(C))$ of the set $\mathcal{T}(C)$. Then there exist $r > 0$ and $\kappa > 0$ such that $$
\dist( x, \mathcal{T}^{-1}(y) \cap C ) \le \kappa \big( 1 + \| x - x^* \| \big) \| \mathcal{T} x - y \|
\qquad \forall x \in C \quad \forall y \in B(y^*, r). $$ \end{theorem}
In the following sections we employ Theorems~\ref{Theorem_CompleteExactness}--\ref{Theorem_Robinson_Ursescu} to verify complete or local exactness of penalty functions for optimal control problems with terminal and state constraints.
\begin{remark} In our exposition of the theory of exact penalty functions we mainly followed papers\cite{Dolgopolik_ExPen_I,Dolgopolik_ExPen_II}. A completely different approach to an analysis of the \textit{global} exactness of exact penalty functions based on the Palais-Smale condition was developed by Zaslavski\cite{Zaslavski}. It seems possible to apply the main results of monograph\cite{Zaslavski} to obtain sufficient conditions for the global exactness of penalty functions for some optimal control problems that significantly differ from the ones obtained in this article. A derivation of such conditions lies beyond the scope of this article, and we leave it as an interesting open problem for future research. \end{remark}
\section{Exact Penalisation of Terminal Constraints} \label{Sect_ExactPen_TerminalConstraint}
In this section we analyse exact penalty functions for fixed-endpoint optimal control problems, including such problems for linear evolution equations in Hilbert spaces. Our aim is to convert a fixed-endpoint problem into a free-endpoint one by penalising the terminal constraint and obtain conditions under which the penalised free-endpoint problem is equivalent (locally or globally) to the original one. The main results of this section allow one to apply methods for solving free-endpoint optimal control problems to fixed-endpoint problems.
\subsection{Notation}
Let us introduce notation first. Denote by $L_q^m(0, T)$ the Cartesian product of $m$ copies of $L^q(0, T)$, and let $W_{1, p}^d(0, T)$ be the Cartesian product of $d$ copies of the Sobolev space $W^{1, p}(0, T)$. Here $1 \le q, p \le + \infty$. As usual (see, e.g. \cite{Leoni}), we identify the Sobolev space $W^{1, p}(0, T)$ with the space consisting of all those absolutely continuous functions $x \colon [0, T] \to \mathbb{R}$ for which $\dot{x} \in L^p(0, T)$. The space $L_q^m(0, T)$ is equipped with the norm
$\| u \|_q = ( \int_0^T |u(t)|^q \, dt)^{1/q}$, when $1 \le q < + \infty$ (here $| \cdot |$ is the Euclidean norm), while the space $L_{\infty}^m (0, T)$ is equipped with the norm $\| u \|_{\infty} = \esssup_{t \in [0, T]}|u(t)|$. The Sobolev space $W_{1, p}^d(0, T)$ is endowed with the norm $\| x \|_{1, p} = \| x \|_p + \| \dot{x} \|_p$. Let us note that by the Sobolev imbedding theorem (see, e.g. \cite[Theorem~5.4]{Adams}) for any $p \in [1, + \infty]$ there exists $C_p > 0$ such that \begin{equation} \label{SobolevImbedding}
\| x \|_{\infty} \le C_p \| x \|_{1, p} \quad \forall x \in W^d_{1, p}(0, T), \end{equation}
which, in particular, implies that any bounded set in $W^d_{1, p}(0, T)$ is also bounded in $L_{\infty}^d(0, T)$. In what follows we suppose that the Cartesian product $X \times Y$ of normed spaces $X$ and $Y$ is endowed with the norm $\| (x, y) \| = \| x \|_X + \| y \|_Y$. For any $r \in [1, + \infty]$ denote by $r' \in [1, + \infty]$ the \textit{conjugate exponent} of $r$, i.e. $1 / r + 1 / r' = 1$.
Let $g \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^k$ be a given function. We say that $g$ satisfies \textit{the growth condition} of order $(l, s)$ with $0 \le l < + \infty$ and $1 \le s \le + \infty$, if for any $R > 0$ there exist $C_R > 0$ and an a.e. nonnegative function $\omega_R \in L^s(0, T)$ such that
$|g(x, u, t)| \le C_R |u|^l + \omega_R(t)$ for a.e. $t \in [0, T]$ and for all
$(x, u) \in \mathbb{R}^d \times \mathbb{R}^m$ with $|x| \le R$.
Finally, if the function $g = g(x, u, t)$ is differentiable, then the gradient of the function $x \mapsto g(x, u, t)$ is denoted by $\nabla_x g(x, u, t)$, and a similar notation is used for the gradient of the function $u \mapsto g(x, u, t)$.
\subsection{Linear Time-Varying Systems}
We start our analysis with the linear case, since in this case the complete exactness of the penalty function can be obtained without any assumptions on the controllability of the system. Consider the following fixed-endpoint optimal control problem: \begin{equation} \label{LinearFixedEndPointProblem} \begin{split}
{}&\min \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \\
{}&\text{subject to } \dot{x}(t) = A(t) x(t) + B(t) u(t), \quad t \in [0, T], \quad u \in U, \quad
x(0) = x_0, \quad x(T) = x_T. \end{split} \end{equation} Here $x(t) \in \mathbb{R}^d$ is the system state at time $t$, $u(\cdot)$ is a control input, $\theta \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}$, $A \colon [0, T] \to \mathbb{R}^{d \times d}$, and $B \colon [0, T] \to \mathbb{R}^{d \times m}$ are given functions, $T > 0$ and $x_0, x_T \in \mathbb{R}^d$ are fixed. We suppose that $x \in W^d_{1,p}(0, T)$, while the control inputs $u$ belong to a closed convex subset $U$ of the space $L^m_q(0, T)$ (here $1 \le p, q \le + \infty$).
Let us introduce a penalty function for problem \eqref{LinearFixedEndPointProblem}. We will penalise only the terminal constraint $x(T) = x_T$. Define $X = W_{1, p}^d(0, T) \times L_q^m(0, T)$, $M = \{ (x, u) \in X \mid x(T) = x_T \}$, and \begin{equation} \label{AddConstr_LinearCase}
A = \Big\{ (x, u) \in X \Bigm| x(0) = x_0, \: u \in U, \:
\dot{x}(t) = A(t) x(t) + B(t) u(t) \text{ for a.e. } t \in [0, T] \Big\}. \end{equation} Then problem \eqref{LinearFixedEndPointProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$
subject to $(x, u) \in M \cap A$. Define $\varphi(x, u) = |x(T) - x_T|$. Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem of minimising the penalty function $\Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda \varphi(x, u)$ subject to $(x, u) \in A$. Note that this is a free-endpoint problem of the form: \begin{equation} \label{FreeEndPointProblem_withPenalty} \begin{split}
{}&\min_{(x, u) \in X} \Phi_{\lambda}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt + \lambda \big| x(T) - x_T \big| \\
{}&\text{subject to } \dot{x}(t) = A(t) x(t) + B(t) u(t), \quad
t \in [0, T], \quad u \in U, \quad x(0) = x_0. \end{split} \end{equation} Our aim is to show that under some natural assumptions the penalty function $\Phi_{\lambda}$ is completely exact, i.e. that free-endpoint problem \eqref{FreeEndPointProblem_withPenalty} is equivalent to fixed-endpoint problem \eqref{LinearFixedEndPointProblem} for any sufficiently large $\lambda \ge 0$.
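Before turning to the theoretical analysis, let us sketch how penalised problem \eqref{FreeEndPointProblem_withPenalty} can be attacked numerically. The fragment below is a rough illustration under assumptions of our own choosing (a double integrator $\dot{x}^1 = x^2$, $\dot{x}^2 = u$ with $\theta = u^2$, $x_0 = 0$, $x_T = (1, 0)^T$, $U = L^m_q(0, T)$, a forward Euler discretisation, and a derivative-free solver); it is a sketch, not a definitive implementation. Once $\lambda$ is large enough, the terminal miss $|x(T) - x_T|$ vanishes up to solver tolerance, as the exactness theory below predicts.
\begin{verbatim}
# Rough sketch: direct minimisation of the discretised penalty
# function for a double integrator (illustrative assumptions only).
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 20
dt = T / N
x_target = np.array([1.0, 0.0])

def x_of(u):
    # forward Euler for x1' = x2, x2' = u, x(0) = 0
    x = np.zeros(2)
    for uk in u:
        x = x + dt * np.array([x[1], uk])
    return x

def Phi(u, lam):
    # discretised Phi_lambda(x, u) = int theta dt + lam*|x(T) - x_T|
    return dt * np.sum(u**2) + lam * np.linalg.norm(x_of(u) - x_target)

for lam in [1.0, 10.0, 100.0]:
    # Powell's method tolerates the nonsmooth penalty term
    res = minimize(Phi, np.zeros(N), args=(lam,), method="Powell")
    print(lam, np.round(x_of(res.x), 4), round(res.fun, 4))
\end{verbatim}
For the continuous-time data above the minimum-energy control steering $0$ to $(1, 0)^T$ in time $T = 1$ is $u(t) = 6 - 12 t$ with cost $12$, and the penalised discretised solutions approach it as $\lambda$ grows and the grid is refined.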
Let $\mathcal{I}^*$ be the optimal value of problem \eqref{LinearFixedEndPointProblem}. Recall that $S_{\lambda}(c) = \{ (x, u) \in A \mid \Phi_{\lambda}(x, u) < c \}$ for any $c \in \mathbb{R}$ and $\Omega_{\delta} = \{ (x, u) \in A \mid \varphi(x, u) < \delta \}$ for any $\delta > 0$. In our case the set $\Omega_{\delta}$ consists of all those $(x, u) \in W_{1, p}^d(0, T) \times L_q^m(0, T)$ for which $u \in U$, \begin{equation} \label{LinearTimeVaryingSystems}
\dot{x}(t) = A(t) x(t) + B(t) u(t) \quad \text{for a.e. } t \in [0, T], \quad x(0) = x_0, \end{equation}
and $|x(T) - x_T| < \delta$. Finally, denote by $\mathcal{R}(x_0, T)$ the set that is reachable in time $T$, i.e. the set of all those $\xi \in \mathbb{R}^d$ for which there exists $u \in U$ such that $x(T) = \xi$, where $x(\cdot)$ is a solution of \eqref{LinearTimeVaryingSystems}. Observe that the reachable set $\mathcal{R}(x_0, T)$ is convex due to the convexity of the set $U$ and the linearity of the system. Recall also that the \textit{relative interior} of a convex set $C \subset \mathbb{R}^d$, denoted $\relint C$, is the interior of $C$ relative to the affine hull of $C$.
The following theorem on the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem} can be proved with the use of the state-transition matrix for \eqref{LinearTimeVaryingSystems}. Here, we present a different and more instructive (although slightly longer) proof of this result, since it contains several important ideas related to penalty functions for optimal control problems, which will be utilised in the following sections.
\begin{theorem} \label{Theorem_FixedEndPointProblem_Linear} Let $q \ge p$, and the following assumptions be valid: \begin{enumerate} \item{$A(\cdot) \in L_{\infty}^{d \times d}(0, T)$ and $B(\cdot) \in L_{\infty}^{d \times m}(0, T)$; \label{Assumpt_LTI_BoundedCoef}}
\item{the function $\theta = \theta(x, u, t)$ is continuous, differentiable in $x$ and $u$, and the functions $\nabla_x \theta$ and $\nabla_u \theta$ are continuous; }
\item{either $q = + \infty$ or the functions $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, while the function $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$; \label{Assumpt_LTI_DerivGrowthCond}}
\item{there exists a globally optimal solution of problem \eqref{LinearFixedEndPointProblem}, and $x_T$ belongs to the relative interior of the reachable set $\mathcal{R}(x_0, T)$ (in the case $U = L_q^m(0, T)$ this assumption holds true automatically); \label{Assumpt_LTI_EndpointRelInt} }
\item{there exist $\lambda_0 > 0$, $c > \mathcal{I}^*$ and $\delta > 0$ such that the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded in $W^d_{1, p}(0, T) \times L_q^m(0, T)$, and the function $\Phi_{\lambda_0}(x, u)$ is bounded below on $A$. \label{Assumpt_LTI_SublevelBounded}} \end{enumerate} Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem} is completely exact on $S_{\lambda}(c)$. \end{theorem}
\begin{proof} Our aim is to employ Theorem~\ref{Theorem_CompleteExactness}. To this end, note that from the essential boundedness of $A(\cdot)$ and $B(\cdot)$, and the fact that $p \le q$ it follows that the function $(x, u) \mapsto \dot{x}(\cdot) - A(\cdot) x(\cdot) - B(\cdot) u(\cdot)$ continuously maps $X$ to $L^d_p(0, T)$. Hence taking into account \eqref{SobolevImbedding} and the fact that $U$ is closed by our assumptions one obtains that the set $A$ is closed (see \eqref{AddConstr_LinearCase}). By applying \eqref{SobolevImbedding} one gets that $$
\big| \varphi(x, u) - \varphi(y, v) \big| = \big| |x(T) - x_T| - |y(T) - x_T| \big|
\le |x(T) - y(T)| \le C_p \| x - y \|_{1, p}
\quad \forall (x, u), (y, v) \in X, $$ i.e. the function $\varphi$ is continuous. By \cite[Theorem~7.3]{FonsecaLeoni} the growth condition on the function $\theta$ guarantees that the functional $\mathcal{I}(x, u)$ is correctly defined and finite for any $(x, u) \in X$, while by \cite[Proposition~4]{DolgopolikFominyh} the growth conditions on $\nabla_x \theta$ and $\nabla_u \theta$ ensure that the functional $\mathcal{I}(x, u)$ is Lipschitz continuous on any bounded subset of $X$. Hence, in particular, it is Lipschitz continuous on any bounded open set containing the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ (such a \textit{bounded} open set exists, since $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded by our assumption). Thus, by Theorem~\ref{Theorem_CompleteExactness} it remains to check that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$.
Let $(x, u) \in S_{\lambda_0}(c) \cap \Omega_{\delta}$ be such that $\varphi(x, u) > 0$, i.e. $x(T) \ne x_T$. Choose any $(\widehat{x}, \widehat{u}) \in \Omega = M \cap A$ (recall that $\Omega$ is not empty, since by our assumption problem \eqref{LinearFixedEndPointProblem} has a globally optimal solution). By definition $\widehat{x}(T) = x_T$. Put $\Delta x = ( \widehat{x} - x ) / \sigma$ and $\Delta u = ( \widehat{u} - u ) / \sigma$, where
$\sigma = \| \widehat{x} - x \|_{1, p} + \| \widehat{u} - u \|_q > 0$.
Then $\| (\Delta x, \Delta u) \|_X = \| \Delta x \|_{1, p} + \| \Delta u \|_q = 1$. From the linearity of the system and the convexity of the set $U$ it follows that for any $\alpha \in [0, \sigma]$ one has $(x + \alpha \Delta x, u + \alpha \Delta u) \in A$. Furthermore, note that $(x + \alpha \Delta x)(T) = x(T) + \alpha \sigma^{-1} (x_T - x(T))$. Hence \begin{align*}
\varphi^{\downarrow}_A(x, u) &\le \lim_{\alpha \to +0}
\frac{\varphi(x + \alpha \Delta x, u + \alpha \Delta u) - \varphi(x, u)}{\alpha \| (\Delta x, \Delta u) \|_X} \\
&= \lim_{\alpha \to +0} \frac{(1 - \alpha \sigma^{-1}) |x(T) - x_T| - |x(T) - x_T|}{\alpha}
= - \frac{1}{\sigma} |x(T) - x_T|. \end{align*} Therefore, it remains to check that there exists $C > 0$ such that for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ one can find $(\widehat{x}, \widehat{u}) \in \Omega$ satisfying the inequality \begin{equation} \label{ErrorBound_TerminalConstraint}
\| x - \widehat{x} \|_{1, p} + \| u - \widehat{u} \|_q \le C |x(T) - x_T|. \end{equation} Then $\varphi^{\downarrow}_A(x, u) \le - 1 / C$ for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$, and the proof is complete.
Firstly, let us check that \eqref{ErrorBound_TerminalConstraint} follows from a seemingly weaker inequality, which is easier to prove. Let $(x_1, u_1) \in A$ and $(x_2, u_2) \in A$. Then for any $t \in [0, T]$ one has $x_1(t) - x_2(t) = \int_0^t ( A(\tau) (x_1(\tau) - x_2(\tau)) + B(\tau) (u_1(\tau) - u_2(\tau)) ) d \tau$. By applying H\"{o}lder's inequality one gets that for any $t \in [0, T]$ $$
|x_1(t) - x_2(t)| \le \| B(\cdot) \|_{\infty} T^{1/q'} \| u_1 - u_2 \|_q
+ \| A(\cdot) \|_{\infty} \int_0^t |x_1(\tau) - x_2(\tau)| \, d \tau. $$ Hence by the Gr\"{o}nwall-Bellman inequality one obtains that
$\| x_1 - x_2 \|_{\infty} \le L_0 \| u_1 - u_2 \|_q$ for all $(x_1, u_1) \in A$ and $(x_2, u_2) \in A$, where
$L_0 = \| B(\cdot) \|_{\infty} T^{1/q'} ( 1 + T \| A(\cdot) \|_{\infty} e^{T \| A(\cdot) \|_{\infty}} )$. Consequently, by applying the equality $$
\dot{x}_1(t) - \dot{x}_2(t) = A(t) \big( x_1(t) - x_2(t) \big) + B(t) \big( u_1(t) - u_2(t) \big), $$ H\"{o}lder's inequality, and the fact that $q \ge p$ one obtains $$
\| \dot{x}_1 - \dot{x}_2 \|_p \le T^{1/p} \| A(\cdot) \|_{\infty} \| x_1 - x_2 \|_{\infty}
+ \| B(\cdot) \|_{\infty} T^{\frac{q - p}{qp}} \| u_1 - u_2 \|_q
\le \Big( T^{1/p} \| A(\cdot) \|_{\infty} L_0 + T^{\frac{q - p}{qp}} \| B(\cdot) \|_{\infty} \Big) \| u_1 - u_2 \|_q, $$
i.e. $\| x_1 - x_2 \|_{1, p} \le L \| u_1 - u_2 \|_q$ for some $L > 0$ depending only on $A(\cdot)$, $B(\cdot)$, $T$, $p$, and $q$. Therefore, it is sufficient to check that there exists $C > 0$ such that for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ one can find $(\widehat{x}, \widehat{u}) \in \Omega$ satisfying the inequality \begin{equation} \label{LinearSystem_SensitivityCond}
\| u - \widehat{u} \|_q \le C |x(T) - x_T| \end{equation} (cf.~\eqref{ErrorBound_TerminalConstraint}). Let us prove inequality \eqref{LinearSystem_SensitivityCond} with the use of Robinson's theorem (Theorem~\ref{Theorem_Robinson_Ursescu}).
Introduce the linear operator $\mathcal{T} \colon L_q^m(0, T) \to \mathbb{R}^d$, $\mathcal{T} v = h(T)$, where $h \in W_{1, p}^d(0, T)$ is a solution of \begin{equation} \label{TimeVaryingSystem_FromZero}
\dot{h}(t) = A(t) h(t) + B(t) v(t), \quad h(0) = 0. \end{equation} For any $v \in L_q^m(0, T)$ a unique absolutely continuous solution $h$ of this equation defined on $[0, T]$ exists by
\cite[Theorem~1.1.3]{Filippov}. By applying H\"{o}lder's inequality and the fact that $q \ge p$ one gets that $\| \dot{h} \|_p \le T^{1/p} \| A(\cdot) \|_{\infty} \| h \|_{\infty} + \| B(\cdot) \|_{\infty} T^{(q - p)/qp}
\| v \|_q$, which implies that $h \in W^d_{1, p}(0, T)$, and the linear operator $\mathcal{T}$ is correctly defined. Let us check that it is bounded. Indeed, fix any $v \in L_q^m(0, T)$ and the corresponding solution $h$ of \eqref{TimeVaryingSystem_FromZero}. For all $t \in [0, T]$ one has $$
|h(t)| = \left| \int_0^t \big( A(\tau) h(\tau) + B(\tau) v(\tau) \big) \, d \tau \right|
\le \| B(\cdot) \|_{\infty} T^{1/q'} \| v \|_q + \| A(\cdot) \|_{\infty} \int_0^t |h(\tau)| \, d \tau, $$
which with the use of the Gr\"{o}nwall-Bellman inequality implies that $|h(T)| \le L_0 \| v \|_q$, i.e. the operator $\mathcal{T}$ is bounded.
Fix any feasible point $(x_*, u_*) \in \Omega$ of problem~\eqref{LinearFixedEndPointProblem}, i.e. $\dot{x}_*(t) = A(t) x_*(t) + B(t) u_*(t)$ for a.e. $t \in [0, T]$, $u_* \in U$, $x_*(0) = x_0$, and $x_*(T) = x_T$. Observe that for any $(x, u) \in A$ one has $x(0) - x_*(0) = 0$ and $\dot{x}(t) - \dot{x}_*(t) = A(t) \big( x(t) - x_*(t) \big) + B(t) \big( u(t) - u_*(t) \big)$ for a.e. $t \in [0, T]$, which implies that $x(T) = (x(T) - x_T) + x_T = \mathcal{T}(u - u_*) + x_T$ (see \eqref{AddConstr_LinearCase} and \eqref{TimeVaryingSystem_FromZero}). Consequently, one has \begin{equation} \label{RechableSet_AsShiftedImage}
\mathcal{R}(x_0, T) = x_T + \mathcal{T}(U - u_*). \end{equation} Define $X_0 = \cl \linhull(U - u_*)$ and $Y_0 = \linhull \mathcal{T}(U - u_*)$. Note that $Y_0$ is closed as a subspace of the finite dimensional space $\mathbb{R}^d$. Moreover, $\mathcal{T}(X_0) = Y_0$. Indeed, it is clear that the operator $\mathcal{T}$ maps $\linhull(U - u_*)$ onto $\linhull \mathcal{T}(U - u_*)$. If $u \in X_0$, then there exists a sequence $\{ u_n \} \subset \linhull(U - u_*)$ converging to $u$. From the boundedness of the operator $\mathcal{T}$ it follows that $\mathcal{T}(u_n) \to \mathcal{T}(u)$ as $n \to \infty$, which implies that $\mathcal{T}(u) \in Y_0$ due to the closedness of $Y_0$ and the fact that $\{ \mathcal{T}(u_n) \} \subset Y_0$ by definition. Thus, $\mathcal{T}(X_0) = Y_0$.
Finally, introduce the operator $\mathcal{T}_0 \colon X_0 \to Y_0$, $\mathcal{T}_0(u) = \mathcal{T}(u)$ for all $u \in X_0$. Clearly, $\mathcal{T}_0$ is a bounded linear operator between Banach spaces. Recall that by our assumption $x_T \in \relint \mathcal{R}(x_0, T)$. By the definition of relative interior it means that $0 \in \interior \mathcal{T}_0(U - u_*)$ (see~\eqref{RechableSet_AsShiftedImage}). Therefore by Robinson's theorem (Theorem~\ref{Theorem_Robinson_Ursescu} with $C = U - u_*$, $x^* = 0$, and $y = 0$) there exists $\kappa > 0$ such that \begin{equation} \label{Robinson_Ursescu_LTVS}
\dist\big( u - u_*, \mathcal{T}_0^{-1}(0) \cap (U - u_*) \big) \le
\kappa \big( 1 + \| u - u_* \|_q \big) \big| \mathcal{T}_0(u - u_*) \big|
\quad \forall u \in U. \end{equation} With the use of this inequality we can easily prove \eqref{LinearSystem_SensitivityCond}. Indeed, fix any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Note that $\mathcal{T}_0(u - u_*) = x(T) - x_T \ne 0$, since $(x, u) \notin \Omega$. By inequality \eqref{Robinson_Ursescu_LTVS} there exists $v \in U - u_*$ such that $\mathcal{T}_0(v) = 0$ and \begin{equation} \label{Robinson_Ursescu_LTVS_mod}
\big\| u - u_* - v \big\|_q \le 2 \kappa \big( 1 + \| u - u_* \|_q \big) |x(T) - x_T|. \end{equation} Define $\widehat{u} = u_* + v$, and let $\widehat{x}$ be the corresponding solution of original system \eqref{LinearTimeVaryingSystems}. Then $\widehat{x}(T) = x_T$, since $(x_*, u_*) \in \Omega$ by definition and $\mathcal{T}(v) = 0$, which yields $(\widehat{x}, \widehat{u}) \in \Omega$. Furthermore, by inequality~\eqref{Robinson_Ursescu_LTVS_mod} one has $$
\| u - \widehat{u} \|_q \le 2 \kappa \big( 1 + \| u - u_* \|_q \big) |x(T) - \widehat{x}(T)|. $$
By our assumption the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded, which implies that there exists $C > 0$ such that $2 \kappa (1 + \| u - u_* \|_q) \le C$ for all $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Thus, for all such $(x, u)$ there exists $(\widehat{x}, \widehat{u}) \in \Omega$ satisfying the inequality
$\| u - \widehat{u} \|_q \le C |x(T) - \widehat{x}(T)|$, i.e. \eqref{LinearSystem_SensitivityCond} holds true, and the proof is complete. \end{proof}
\begin{remark} Let $1 < p \le q < + \infty$, and the function $\theta(x, u, t)$ be convex in $u$ for all $x \in \mathbb{R}^d$ and $t \in [0, T]$. Then under assumptions \ref{Assumpt_LTI_BoundedCoef}--\ref{Assumpt_LTI_DerivGrowthCond} and \ref{Assumpt_LTI_SublevelBounded} of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} a globally optimal solution of problem \eqref{LinearFixedEndPointProblem} exists iff $x_T \in \mathcal{R}(x_0, T)$. Indeed, if $x_T \in \mathcal{R}(x_0, T)$, then the sublevel set $\{ (x, u) \in \Omega \mid \mathcal{I}(x, u) < c \} \subset S_{\lambda_0}(c) \cap \Omega_{\delta}$ is nonempty and bounded due to the fact that $c > \mathcal{I}^*$. Therefore there exists a bounded sequence $\{ (x_n, u_n) \} \subset \Omega$ such that $\mathcal{I}(x_n, u_n) \to \mathcal{I}^*$ as $n \to \infty$. From the fact that the spaces $W^d_{1, p}(0, T)$ and $L^m_q(0, T)$ are reflexive, provided $1 < p, q < + \infty$, it follows that there exists a subsequence $\{ (x_{n_k}, u_{n_k}) \}$ weakly converging to some $(x^*, u^*) \in X$. Since the imbedding of $W^d_{1, p}(0, T)$ into $(C[0, T])^d$ is compact (see, e.g. \cite[Theorem~6.2]{Adams}), without loss of generality one can suppose that $x_{n_k}$ converges to $x^*$ uniformly on $[0, T]$. Utilising this result, as well as the facts that the system is linear and the set $U$ of admissible control inputs is convex and closed, one can readily verify that $(x^*, u^*) \in \Omega$. Furthermore, the convexity of the function $u \mapsto \theta(x, u, t)$ ensures that $\mathcal{I}(x^*, u^*) \le \lim_{k \to \infty} \mathcal{I}(x_{n_k}, u_{n_k}) = \mathcal{I}^*$ (see~\cite[Section~7.3.2]{FonsecaLeoni} and \cite{Ioffe77}), which implies that $(x^*, u^*)$ is a globally optimal solution of \eqref{LinearFixedEndPointProblem}. \end{remark}
\begin{remark} Let us note that assumption~\ref{Assumpt_LTI_SublevelBounded} of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} is satisfied, in particular, if the set $U$ is bounded or there exist $C > 0$ and $\omega \in L^1(0, T)$ such that
$\theta(x, u, t) \ge C |u|^q + \omega(t)$ for all $x \in \mathbb{R}^d$, $u \in \mathbb{R}^m$, and a.e. $t \in (0, T)$. Indeed, in the latter case for any $c > \mathcal{I}^*$ and $(x, u) \in S_{\lambda}(c)$ one has $c > \Phi_{\lambda}(x, u) \ge \mathcal{I}(x, u) \ge C \| u \|_q^q + \int_0^T \omega(t) \, dt$, whence $\| u \|_q \le K := \big( (c - \int_0^T \omega(t) \, dt) / C \big)^{1/q}$ (if $U$ is bounded, then such $K$ exists by definition). Then by applying the Gr\"{o}nwall-Bellman inequality one can easily check that the set $S_{\lambda}(c)$ is bounded for all $c > \mathcal{I}^*$, provided $q \ge p$. Moreover, with the use of the boundedness of the set $S_{\lambda}(c)$ and the growth condition of order $(q, 1)$ on the function $\theta$ one can easily check that the penalty function $\Phi_{\lambda_0}$ is bounded below on $A$ for all $\lambda_0 \ge 0$.
The following example demonstrates that in the general case Theorem~\ref{Theorem_FixedEndPointProblem_Linear} is no longer true, if the assumption that $x_T$ belongs to the relative interior of the reachable set $\mathcal{R}(x_0, T)$ is dropped.
\begin{example} \label{Example_EndPoint_NotRelInt} Let $d = m = 2$, $p = q = 2$, and $T = 1$. Define $U = \{ u \in L_2^2(0, 1) \mid u(t) \in Q \text{ for a.e. } t \in (0, 1) \}$, where $Q = \{ u = (u^1, u^2)^T \in \mathbb{R}^2 \mid u^1 + u^2 \le 1, \: (u^1 - u^2)^2 \le u^1 + u^2 \}$. Note that the set $U$ of admissible control inputs is closed and convex, since, as is easy to see, $Q$ is a closed convex set. Consider the following optimal control problem: \begin{equation} \label{Problem_NoRint_Endpoint}
\min \: \mathcal{I}(x, u) = \int_0^1 \big( u^2(t) - u^1(t) \big) \, dt \quad
\text{s.t.} \quad
\begin{cases}
\dot{x}^1 = 0 \\
\dot{x}^2 = u^1 + u^2
\end{cases}
\quad t \in [0, 1], \quad u \in U, \quad x(0) = x(1) = 0. \end{equation} Let us show at first that in this case $\mathcal{R}(x_0, T) = \{ x \in \mathbb{R}^2 \mid x^1 = 0, \: x^2 \in [0, 1] \}$ (note that $x_0 = 0$ and $T = 1$), which implies that $x_T = 0 \notin \relint \mathcal{R}(x_0, T) = \{ x \in \mathbb{R}^2 \mid x^1 = 0, \: x^2 \in (0, 1) \}$. Indeed, by the definitions of the sets $U$ and $Q$ for any $u \in U$ one has $$
x^2(1) = \int_0^1 (u^1(t) + u^2(t)) \, dt \le \int_0^1 dt = 1, \quad
x^2(1) = \int_0^1 (u^1(t) + u^2(t)) dt \ge \int_0^1 (u^1(t) - u^2(t))^2 \, dt \ge 0, $$ i.e. $x^2(1) \in [0, 1]$. Furthermore, for any $s \in [0, 1]$ one has $x^2(1) = s$ for $u_s^1(t) \equiv (s + \sqrt{s}) / 2$ and $u_s^2(t) \equiv (s - \sqrt{s}) / 2$ (note that $u_s \in U$). Thus, $\mathcal{R}(x_0, T) = \{ 0 \} \times [0, 1]$, and $x_T \notin \relint \mathcal{R}(x_0, T)$. Note that all other assumptions of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} are satisfied.
Let us check that the penalty function $\Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda |x(1)|$ for problem \eqref{Problem_NoRint_Endpoint} is not globally exact. Firstly, note that the only feasible point of problem \eqref{Problem_NoRint_Endpoint} is $(x^*, u^*)$ with $x^*(t) \equiv 0$ and $u^*(t) = 0$ for a.e. $t \in [0, 1]$. Indeed, fix any feasible point $(x, u) \in \Omega$. From the terminal constraint $x(1) = 0$ and the definition of $Q$ it follows that $$
0 = x^2(1) = \int_0^1 (u^1(t) + u^2(t)) \, dt \ge \int_0^1 (u^1(t) - u^2(t))^2 \, dt \ge 0, $$ which implies that $u^1(t) = u^2(t)$ for a.e. $t \in [0, 1]$. Furthermore, by the definition of $Q$ one has $u^1(t) + u^2(t) \ge 0$ for a.e. $t \in (0, 1)$, which together with $\int_0^1 (u^1(t) + u^2(t)) \, dt = 0$ yields $u^1(t) = - u^2(t)$ for a.e. $t \in [0, 1]$. Therefore $u(t) = 0$ for a.e. $t \in [0, 1]$, $x(t) \equiv 0$, and $\Omega = \{ (x^*, u^*) \}$.
Arguing by reductio ad absurdum, suppose that the penalty function
$\Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda |x(1)|$ for problem \eqref{Problem_NoRint_Endpoint} is globally exact. Then there exists $\lambda > 0$ such that $(x^*, u^*)$ is a globally optimal solution of the problem $$
\min \: \Phi_{\lambda}(x, u) = \int_0^1 \big( u^2(t) - u^1(t) \big) \, dt + \lambda |x(1)| \quad
\text{s.t.} \quad
\begin{cases}
\dot{x}^1 = 0 \\
\dot{x}^2 = u^1 + u^2
\end{cases}
\quad t \in [0, 1], \quad u \in U, \quad x(0) = 0. $$ Fix any $s \in (0, 1)$, and define $u = u_s \in U$ (recall that $u_s^1(t) \equiv (s + \sqrt{s}) / 2$ and $u_s^2(t) \equiv (s - \sqrt{s}) / 2$). For the corresponding solution $x_s(t)$ one has $x_s^2(1) = s$, and $$
\Phi_{\lambda}(x_s, u_s) = - \sqrt{s} + \lambda s \ge 0 = \Phi_{\lambda}(x^*, u^*), $$ which is impossible for any sufficiently small $s \in (0, 1)$. Thus, the penalty function $\Phi_{\lambda}$ is not globally exact. \end{example}
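The failure of exactness in this example is also easy to confirm numerically. The following fragment (purely illustrative) evaluates $\Phi_{\lambda}(x_s, u_s) = - \sqrt{s} + \lambda s$ for $s = 1 / (2 \lambda^2) < 1 / \lambda^2$ and observes that it is always negative, however large $\lambda$ is chosen:
\begin{verbatim}
# Phi_lambda(x_s, u_s) = -sqrt(s) + lam*s < 0 whenever s < 1/lam^2,
# so (x*, u*) with Phi_lambda = 0 is never a global minimiser.
import numpy as np
for lam in [1.0, 10.0, 100.0]:
    s = 0.5 / lam**2
    u1, u2 = (s + np.sqrt(s)) / 2, (s - np.sqrt(s)) / 2  # u_s in U
    cost = u2 - u1      # the integrand u^2 - u^1 is constant in t
    x2_T = u1 + u2      # x^2(1) = s, while x^1(1) = 0
    print(lam, cost + lam * abs(x2_T))  # always negative
\end{verbatim}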
\begin{remark} It should be noted that the assumption $x_T \in \relint \mathcal{R}(x_0, T)$ is \textit{not} necessary for the exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem}. For instance, the interested reader can check that if in Example~\ref{Example_EndPoint_NotRelInt} the system has the form $\dot{x}^1 = u^1$ and $\dot{x}^2 = u^2$, then the penalty function
$\Phi_{\lambda}(x, u) = \int_0^1 (u^2(t) - u^1(t)) \, dt + \lambda |x(1)| = x^2(1) - x^1(1) + \lambda |x(1)|$ is completely exact, despite the fact that in this case $\mathcal{R}(x_0, T) = Q$ and $x_T = 0 \notin \relint \mathcal{R}(x_0, T)$. We pose an interesting open problem to find \textit{necessary and sufficient} conditions for the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem} (at least in the time-invariant case). In particular, it seems that in the case when $U = \{ u \in L_q^m(0, T) \mid u(t) \in Q \text{ for a.e. } t \in [0, T] \}$ and $Q$ is a convex polytope, the assumption $x_T \in \relint \mathcal{R}(x_0, T)$ in Theorem~\ref{Theorem_FixedEndPointProblem_Linear} can be dropped. \end{remark}
Let us finally note that in the case when the set $U$ of admissible control inputs is bounded, one can prove the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem} on $A$. In other words, one can prove that free-endpoint problem \eqref{FreeEndPointProblem_withPenalty} is completely equivalent to fixed-endpoint problem \eqref{LinearFixedEndPointProblem} in the sense that these problems have the same optimal value, the same globally/locally optimal solutions, and the same inf-stationary points.
\begin{theorem} \label{Theorem_FixedEndPointProblem_Linear_Global} Let $q \ge p$, assumptions \ref{Assumpt_LTI_BoundedCoef}--\ref{Assumpt_LTI_EndpointRelInt} of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} be valid, and suppose that the set $U$ is bounded. Then the penalty function $\Phi_{\lambda}$ for problem \eqref{LinearFixedEndPointProblem} is completely exact on $A$. \end{theorem}
\begin{proof}
By our assumption there exists $K > 0$ such that $\| u \|_q \le K$ for any $u \in U$. Choose any $(x, u) \in A$. Then by definition $x(t) = x_0 + \int_0^t ( A(\tau) x(\tau) + B(\tau) u(\tau) ) \, d \tau$ for all $t \in [0, T]$, which by H\"{o}lder's inequality implies that $$
|x(t)| \le |x_0| + \| B(\cdot) \|_{\infty} T^{1/q'} \| u \|_q
+ \| A(\cdot) \|_{\infty} \int_0^t |x(\tau)| \, d \tau. $$
Consequently, by applying the Gr\"{o}nwall-Bellman inequality and the fact that $\| u \|_q \le K$ one obtains that
$\| x \|_{\infty} \le C$ for some $C > 0$ depending only on $K$, $A(\cdot)$, $B(\cdot)$, $T$, and $q$. Hence by H\"{o}lder's inequality and the definition of the set $A$ (see~\eqref{AddConstr_LinearCase}) one obtains that $$
\| \dot{x} \|_p = \big\| A(\cdot) x(\cdot) + B(\cdot) u(\cdot) \big\|_p
\le T^{1/p} \| A(\cdot) \|_{\infty} C + \| B(\cdot) \|_{\infty} T^{\frac{q - p}{qp}} K
\quad \forall (x, u) \in A, $$ i.e. the set $A$ is bounded in $X$ and in $L^d_{\infty}(0, T) \times L^m_q(0, T)$. Therefore, both $\mathcal{I}$ and $\Phi_{\lambda}$, for any $\lambda \ge 0$, are bounded below on $A$ due to the fact that the function $\theta$ satisfies the growth condition of order $(q, 1)$ (see assumption~\ref{Assumpt_LTI_DerivGrowthCond} of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}). Now, arguing in the same way as in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}, but replacing $S_{\lambda_0}(c) \cap \Omega_{\delta}$ with $A$ and utilising Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} instead of Theorem~\ref{Theorem_CompleteExactness}, one obtains the desired result. \end{proof}
\subsection{Linear Evolution Equations} \label{SubSec_EvolEq_TerminalConstr}
Let us demonstrate that Theorems~\ref{Theorem_FixedEndPointProblem_Linear} and \ref{Theorem_FixedEndPointProblem_Linear_Global} can be easily extended to the case of optimal control problems for linear evolution equations in Hilbert spaces. In this section we use standard definitions and results on control problems for infinite dimensional systems that can be found, e.g. in monograph\cite{TucsnakWeiss}.
Let $\mathscr{H}$ and $\mathscr{U}$ be complex Hilbert spaces, $\mathbb{T}$ be a strongly continuous semigroup on $\mathscr{H}$ with generator $\mathcal{A} \colon \mathcal{D}(\mathcal{A}) \to \mathscr{H}$, and let $\mathcal{B}$ be an admissible control operator for $\mathbb{T}$ (see~\cite[Def.~4.2.1]{TucsnakWeiss}). Consider the following fixed-endpoint optimal control problem: \begin{equation} \label{EvolEqFixedEndPointProblem} \begin{split}
{}&\min_{(x, u)} \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \\
{}&\text{subject to } \dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t), \quad t \in [0, T], \quad u \in U, \quad
x(0) = x_0, \quad x(T) = x_T. \end{split} \end{equation} Here $\theta \colon \mathscr{H} \times \mathscr{U} \times [0, T] \to \mathbb{R}$ is a given function, $T > 0$ and $x_0, x_T \in \mathscr{H}$ are fixed, and $U$ is a closed convex subset of the space $L^2((0, T); \mathscr{U})$ consisting of all those measurable functions $u \colon (0, T) \to \mathscr{U}$
for which $\| u \|_{L^2((0, T); \mathscr{U})} = \big( \int_0^T \| u(t) \|_{\mathscr{U}}^2 \, dt \big)^{1/2} < + \infty$.
Let us introduce a penalty function for problem \eqref{EvolEqFixedEndPointProblem}. As in the previous section, we only penalise the terminal constraint $x(T) = x_T$. For any $t \ge 0$ let $F_t u = \int_0^t \mathbb{T}_{t - \sigma} \mathcal{B} u(\sigma) \, d \sigma$ be the input map corresponding to $(\mathcal{A}, \mathcal{B})$. By \cite[Proposition~4.2.2]{TucsnakWeiss}, $F_t$ is a bounded linear operator from $L^2((0, T); \mathscr{U})$ to $\mathscr{H}$. Furthermore, by applying \cite[Proposition~4.2.5]{TucsnakWeiss} one obtains that for any $u \in L^2((0, T); \mathscr{U})$ the initial value problem \begin{equation} \label{LinearEvolEq}
\dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t), \quad x(0) = x_0 \end{equation} has a unique solution $x \in C([0, T]; \mathscr{H})$ given by \begin{equation} \label{SolutionViaSemiGroup}
x(t) = \mathbb{T}_t x_0 + F_t u \quad \forall t \in [0, T]. \end{equation} Define $X = C([0, T]; \mathscr{H}) \times L^2((0, T); \mathscr{U})$, $M = \{ (x, u) \in X \mid x(T) = x_T \}$, and $$
A = \Big\{ (x, u) \in X \Bigm| x(0) = x_0, \: u \in U, \: \text{and $\eqref{SolutionViaSemiGroup}$ holds true} \Big\}. $$ Then problem \eqref{EvolEqFixedEndPointProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$
subject to $(x, u) \in M \cap A$. Introduce the penalty term $\varphi(x, u) = \| x(T) - x_T \|_{\mathscr{H}}$. Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem of minimising the penalty function $\Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda \varphi(x, u)$ subject to $(x, u) \in A$, which is a free-endpoint problem of the form: \begin{equation} \label{EvolEqFreeEndPointProblem} \begin{split}
{}&\min_{(x, u)} \Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda \varphi(x, u)
= \int_0^T \theta(x(t), u(t), t) \, dt + \lambda \| x(T) - x_T \|_{\mathscr{H}} \\
{}&\text{subject to } \dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t), \quad t \in [0, T], \quad u \in U, \quad
x(0) = x_0. \end{split} \end{equation} Denote by $\mathcal{R}(x_0, T)$ the set that is reachable in time $T$, i.e. the set of all those $\xi \in \mathscr{H}$ for which there exists $u \in U$ such that $x(T) = \xi$, where $x(\cdot)$ is defined in \eqref{SolutionViaSemiGroup}. Observe that by definition $\mathcal{R}(x_0, T) = F_T(U) + \mathbb{T}_T x_0$, which implies that the reachable set $\mathcal{R}(x_0, T)$ is convex due to the convexity of the set $U$.
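For readers who prefer a computational viewpoint, the mild-solution formula \eqref{SolutionViaSemiGroup} is straightforward to evaluate in the finite-dimensional case, where $\mathbb{T}_t = e^{t \mathcal{A}}$ and $F_t u = \int_0^t e^{(t - \sigma) \mathcal{A}} \mathcal{B} u(\sigma) \, d\sigma$. The following sketch (with an illustrative system of our own choosing) approximates $F_t u$ by composite trapezoidal quadrature:
\begin{verbatim}
# Mild solution x(t) = T_t x0 + F_t u for a finite-dimensional
# stand-in of the evolution equation; A, B, x0, u are illustrative.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
B = np.array([[0.0], [1.0]])
x0 = np.array([1.0, 0.0])
u = lambda s: np.array([np.sin(s)])

def mild_solution(t, n=200):
    grid = np.linspace(0.0, t, n + 1)
    vals = np.stack([expm((t - s) * A) @ (B @ u(s)) for s in grid])
    h = t / n
    integral = h * (0.5*vals[0] + vals[1:-1].sum(axis=0) + 0.5*vals[-1])
    return expm(t * A) @ x0 + integral

print(mild_solution(1.0))
\end{verbatim}
In the genuinely infinite-dimensional setting $e^{t \mathcal{A}}$ must of course be replaced by the semigroup $\mathbb{T}_t$ itself (e.g. via a spatial discretisation), but the structure of the computation is the same.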
Our aim is to show that under a natural assumption on the reachable set $\mathcal{R}(x_0, T)$ the penalty function $\Phi_{\lambda}$ is completely exact, i.e. for any sufficiently large $\lambda \ge 0$ free-endpoint problem \eqref{EvolEqFreeEndPointProblem} is equivalent to fixed-endpoint problem \eqref{EvolEqFixedEndPointProblem}.
In the finite dimensional case we assumed that $x_T \in \relint \mathcal{R}(x_0, T)$. In the infinite dimensional case we will use the same assumption, since to the best of the author's knowledge it is the weakest assumption allowing one to utilise Robinson's theorem. However, recall that the relative interior of a convex subset of a finite dimensional space is always nonempty, but this statement is no longer true in infinite dimensional spaces (see~\cite{BorweinLewis92,BorwinGoebel03}). Thus, in the finite dimensional case the condition $x_T \in \relint \mathcal{R}(x_0, T)$ simply restricts the location of $x_T$ in the reachable set, while in the infinite dimensional case it also imposes the assumption ($\relint \mathcal{R}(x_0, T) \ne \emptyset$) on the reachable set itself. For the sake of completeness recall that the relative interior of a convex subset $C$ of a Banach space $Y$, denoted $\relint C$, is the interior of $C$ relative to the \textit{closed} affine hull of $C$.
\begin{theorem} \label{Theorem_Exactness_EvolutionEquations} Let the following assumptions be valid: \begin{enumerate} \item{$\theta$ is continuous, and for any $R > 0$ there exist $C_R > 0$ and an a.e. nonnegative function
$\omega_R \in L^1(0, T)$ such that $| \theta(x, u, t) | \le C_R \| u \|_{\mathscr{U}}^2 + \omega_R(t)$ for all
$x \in \mathscr{H}$, $u \in \mathscr{U}$, and $t \in (0, T)$ such that $\| x \|_{\mathscr{H}} \le R$; \label{CorrectlyDefined_EvolEq_Assumpt} }
\item{either the set $U$ is bounded in $L^2((0, T), \mathscr{U})$ or there exist $C_1 > 0$ and $\omega \in L^1(0, T)$
such that $\theta(x, u, t) \ge C_1 \| u \|_{\mathscr{U}}^2 + \omega(t)$ for all $x \in \mathscr{H}$, $u \in \mathscr{U}$, and $t \in [0, T]$; \label{GrowthCond_EvolEq_Assumpt} }
\item{$\theta$ is differentiable in $x$ and $u$, the functions $\nabla_x \theta$ and $\nabla_u \theta$ are continuous, and for any $R > 0$ there exist $C_R > 0$, and a.e. nonnegative functions $\omega_R \in L^1(0, T)$ and $\eta_R \in L^2(0, T)$ such that \begin{equation} \label{DerivGrowthCond_EvolEq}
\| \nabla_x \theta(x, u, t) \|_{\mathscr{H}} \le C_R \| u \|_{\mathscr{U}}^2 + \omega_R(t), \quad
\| \nabla_u \theta(x, u, t) \|_{\mathscr{U}} \le C_R \| u \|_{\mathscr{U}} + \eta_R(t) \end{equation}
for all $x \in \mathscr{H}$, $u \in \mathscr{U}$, and $t \in (0, T)$ such that $\| x \|_{\mathscr{H}} \le R$; \label{DerivGrowthCond_EvolEq_Assumpt} }
\item{there exists a globally optimal solution of problem \eqref{EvolEqFixedEndPointProblem}, $\relint \mathcal{R}(x_0, T) \ne \emptyset$ and $x_T \in \relint \mathcal{R}(x_0, T)$. \label{EndPointInterior_Assumpt} } \end{enumerate} Then for all $c \in \mathbb{R}$ there exists $\lambda^*(c) \ge 0$ such that for any $\lambda \ge \lambda^*(c)$ the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqFixedEndPointProblem} is completely exact on the set $S_{\lambda}(c)$. \end{theorem}
\begin{proof}
Our aim is to apply Theorem~\ref{Theorem_CompleteExactness}. It is easily seen that assumption~\ref{CorrectlyDefined_EvolEq_Assumpt} ensures that the functional $\mathcal{I}(x, u)$ is correctly defined and finite for any $(x, u) \in X$. In turn, from assumption~\ref{GrowthCond_EvolEq_Assumpt} it follows that for any $c \in \mathbb{R}$ and $\lambda \ge 0$ there exists $K > 0$ such that $\| u \|_{L^2((0, T); \mathscr{U})} \le K$ for any $(x, u) \in S_{\lambda}(c)$, and the penalty function $\Phi_{\lambda}$ is bounded below on $A$ for all $\lambda \ge 0$ (if $U$ is bounded, then this fact follows from assumption~\ref{CorrectlyDefined_EvolEq_Assumpt}).
Hence taking into account \eqref{SolutionViaSemiGroup}, and the facts that $\| F_t \| \le \| F_T \|$ for any $t \le T$
(see \cite[formula $(4.2.5)$]{TucsnakWeiss}), and $\| \mathbb{T}_t \| \le M_{\omega} e^{\omega t}$ for all $t \ge 0$ and for some $\omega \in \mathbb{R}$ and $M_{\omega} \ge 1$ by \cite[Proposition~2.1.2]{TucsnakWeiss} one obtains that
$\| x \|_{C([0, T]; \mathscr{H})} \le M_{\omega} \max_{t \in [0, T]} e^{\omega t} \| x_0 \| + \| F_T \| K$, i.e. the set $S_{\lambda}(c)$ is bounded in $X$ for any $\lambda \ge 0$ and $c \in \mathbb{R}$.
Observe that the penalty term $\varphi$ is continuous on $X$, since by the reverse triangle inequality one has $$
|\varphi(x, u) - \varphi(y, v)| = \big| \| x(T) - x_T \|_{\mathscr{H}} - \| y(T) - x_T \|_{\mathscr{H}} \big|
\le \| x(T) - y(T) \|_{\mathscr{H}} \le \| x - y \|_{C([0, T]; \mathscr{H})} $$ for all $(x, u), (y, v) \in X$. Furthermore from \eqref{SolutionViaSemiGroup}, the closedness of the set $U$, and the fact that $F_t$ continuously maps $L^2((0, T); \mathscr{U})$ to $\mathscr{H}$ by \cite[Proposition~4.2.2]{TucsnakWeiss} it follows that the set $A$ is closed.
Let us check that assumption~\ref{DerivGrowthCond_EvolEq_Assumpt} ensures that the functional $\mathcal{I}$ is Lipschitz continuous on any bounded subset of $X$ (in particular, on any bounded open set containing the set $S_{\lambda}(c)$). Indeed, fix any $(x, u) \in X$, $(h, v) \in X$, and $\alpha \in (0, 1]$. By the mean value theorem for a.e. $t \in (0, T)$ there exists $\alpha(t) \in (0, \alpha)$ such that \begin{multline} \label{MeanValue_Func_EvolEq}
\frac{1}{\alpha} \Big( \theta(x(t) + \alpha h(t), u(t) + \alpha v(t), t) - \theta(x(t), u(t), t) \Big) \\
= \langle \nabla_x \theta(x(t) + \alpha(t) h(t), u(t) + \alpha(t) v(t), t), h(t) \rangle
+ \langle \nabla_u \theta(x(t) + \alpha(t) h(t), u(t) + \alpha(t) v(t), t), v(t) \rangle. \end{multline} The right-hand side of this equality converges to $\langle \nabla_x \theta(x(t), u(t), t), h(t) \rangle + \langle \nabla_u \theta(x(t), u(t), t), v(t) \rangle$ as $\alpha \to 0$ for a.e. $t \in (0, T)$ due to the continuity of the gradients $\nabla_x \theta$ and $\nabla_u \theta$. Furthermore, by \eqref{DerivGrowthCond_EvolEq} there exist $C_R > 0$, and a.e. nonnegative functions $\omega_R \in L^1(0, T)$ and $\eta_R \in L^2(0, T)$ such that \begin{align*}
\big| \langle \nabla_x \theta(x(t) + \alpha h(t), u(t) + \alpha v(t), t), h(t) \rangle \big| &\le
\big( 4 C_R ( \| u(t) \|^2_{\mathscr{U}} + \| v(t) \|^2_{\mathscr{U}} ) + \omega_R(t) \big)
\| h \|_{C([0, T]; \mathscr{H})} \\
\big| \langle \nabla_u \theta(x(t) + \alpha h(t), u(t) + \alpha v(t), t), v(t) \rangle \big| &\le
\big( C_R (\| u(t) \|_{\mathscr{U}} + \| v(t) \|_{\mathscr{U}}) + \eta_R(t) \big) \| v(t) \|_{\mathscr{U}} \end{align*} for a.e. $t \in (0, T)$ and all $\alpha \in [0, 1]$. Note that the right-hand sides of these inequalities belong to $L^1(0, T)$ and do not depend on $\alpha$. Therefore, integrating \eqref{MeanValue_Func_EvolEq} from $0$ to $T$ and passing to the limit with the use of Lebesgue's dominated convergence theorem one obtains that the functional $\mathcal{I}$ is G\^{a}teaux differentiable at every point $(x, u) \in X$, and its G\^{a}teaux derivative has the form $$
\mathcal{I}'(x, u)[h, v] = \int_0^T \Big( \langle \nabla_x \theta(x(t), u(t), t), h(t) \rangle +
\langle \nabla_u \theta(x(t), u(t), t), v(t) \rangle \Big) \, dt. $$ Hence and from \eqref{DerivGrowthCond_EvolEq} it follows that for any $R > 0$ and $(x, u) \in X$ such that
$\| x \|_{C([0, T]; \mathscr{H})} \le R$ there exist $C_R > 0$, and a.e. nonnegative functions $\omega_R \in L^1(0, T)$ and $\eta_R \in L^2(0, T)$ such that $$
\big\| \mathcal{I}'(x, u) \big\| \le C_R \| u \|^2_{L^2((0, T); \mathscr{U})} + \| \omega_R \|_1
+ C_R \| u \|_{L^2((0, T); \mathscr{U})} + \| \eta_R \|_2. $$ Therefore, the G\^{a}teaux derivative of $\mathcal{I}$ is bounded on bounded subsets of the space $X$, which, as is well-known and easy to check, implies that the functional $\mathcal{I}$ is Lipschitz continuous on bounded subsets of $X$.
Fix any $\lambda \ge 0$ and $c > \inf_{(x, u) \in \Omega} \mathcal{I}(x, u)$. By Theorem~\ref{Theorem_CompleteExactness} it remains to check that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for any $(x, u) \in S_{\lambda}(c)$ such that $\varphi(x, u) > 0$. Choose any such $(x, u)$ and $(\widehat{x}, \widehat{u}) \in \Omega$. Note that $x(T) \ne x_T$ due to the inequality $\varphi(x, u) > 0$. Define $\Delta x = ( \widehat{x} - x ) / \sigma$ and $\Delta u = ( \widehat{u} - u ) / \sigma$, where
$\sigma = \| \widehat{x} - x \|_{C([0, T]; \mathscr{H})} + \| \widehat{u} - u \|_{L^2((0, T); \mathscr{U})} > 0$. Then $\| (\Delta x, \Delta u) \|_X =
\| \Delta x \|_{C([0, T]; \mathscr{H})} + \| \Delta u \|_{L^2((0, T); \mathscr{U})} = 1$. Due to the linearity of the system and the convexity of the set $U$, for any $\alpha \in [0, \sigma]$ one has $(x + \alpha \Delta x, u + \alpha \Delta u) \in A$ and $(x + \alpha \Delta x)(T) = x(T) + \alpha \sigma^{-1} (x_T - x(T))$, since $\widehat{x}(T) = x_T$ by definition. Hence \begin{align*}
\varphi^{\downarrow}_A(x, u) &\le \lim_{\alpha \to +0}
\frac{\varphi(x + \alpha \Delta x, u + \alpha \Delta u) - \varphi(x, u)}{\alpha \| (\Delta x, \Delta u) \|_X} \\
&= \lim_{\alpha \to +0}
\frac{(1 - \alpha \sigma^{-1}) \| x(T) - x_T \|_{\mathscr{H}} - \| x(T) - x_T \|_{\mathscr{H}}}{\alpha}
= - \frac{1}{\sigma} \| x(T) - x_T \|_{\mathscr{H}}. \end{align*} Therefore, it remains to check that there exists $C > 0$ such that for any $(x, u) \in S_{\lambda}(c) \setminus \Omega$ one can find $(\widehat{x}, \widehat{u}) \in \Omega$ satisfying the inequality \begin{equation} \label{PropertyS_EvolutionEquation}
\| x - \widehat{x} \|_{C([0, T]; \mathscr{H})}
+ \| u - \widehat{u} \|_{L^2((0, T); \mathscr{U})} \le C \| x(T) - x_T \|_{\mathscr{H}}. \end{equation} Then $\varphi^{\downarrow}_A(x, u) \le - 1 / C$ for any such $(x, u)$, and the proof is complete.
From \eqref{SolutionViaSemiGroup} and the inequality $\| F_t \| \le \| F_T \|$, $t \in [0, T]$ (see \cite[formula $(4.2.5)$]{TucsnakWeiss}), it follows that for any $(x, u) \in A$ and $(\widehat{x}, \widehat{u}) \in A$ one has
$\| x - \widehat{x} \|_{C([0, T]; \mathscr{H})} \le \| F_T \| \| u - \widehat{u} \|_{L^2((0, T); \mathscr{U})}$. Consequently, it is sufficient to check that there exists $C > 0$ such that for any $(x, u) \in S_{\lambda}(c) \setminus \Omega$ one can find $(\widehat{x}, \widehat{u}) \in \Omega$ satisfying the inequality \begin{equation} \label{MetricReg_InputMap}
\| u - \widehat{u} \|_{L^2((0, T); \mathscr{U})} \le C \| x(T) - x_T \|_{\mathscr{H}}. \end{equation} To this end, fix any $(x_*, u_*) \in \Omega$, and denote by $\mathcal{T} \colon \cl \linhull (U - u_*) \to \cl \linhull F_T(U - u_*)$ the mapping such that $\mathcal{T}(u) = F_T (u)$ for any $u \in \cl \linhull (U - u_*)$. Note that $\mathcal{T}$ is correctly defined, since the operator $F_T$ maps $\cl \linhull (U - u_*)$ to $\cl \linhull F_T(U - u_*)$. Indeed, by definition $F_T(\linhull (U - u_*)) \subseteq \cl \linhull F_T(U - u_*)$. If $u_0 \in \cl \linhull (U - u_*)$, then there exists a sequence $\{ u_n \} \subset \linhull (U - u_*)$ converging to $u_0$. Due to the continuity of $F_T$ the sequence $\{ F_T u_n \}$ converges to $F_T u_0$, which yields $F_T u_0 \in \cl \linhull F_T(U - u_*)$.
Observe that $\mathcal{T}$ is a bounded linear operator between Banach spaces, since the operator $F_T$ is bounded. Furthermore, by \eqref{SolutionViaSemiGroup} for any $u \in U$ one has $F_T(u - u_*) = x(T) - x_T$, which implies that $\mathcal{T}(U - u_*) = F_T(U - u_*) = \mathcal{R}(x_0, T) - x_T$. Therefore, by assumption~\ref{EndPointInterior_Assumpt} one has $0 \in \interior \mathcal{T}(U - u_*)$, since the closed affine hull of $\mathcal{R}(x_0, T)$ coincides with $\cl \linhull F_T(U - u_*) + x_T$ due to the fact that $0 \in F_T(U - u_*)$. Hence by Robinson's theorem (Theorem~\ref{Theorem_Robinson_Ursescu} with $C = U - u_*$, $x^* = 0$, and $y = 0$) there exists $\kappa > 0$ such that \begin{equation} \label{RobinsonUrsescu_InputMap_direct}
\dist\big( u - u_*, \mathcal{T}^{-1}(0) \cap (U - u_*) \big) \le
\kappa \big( 1 + \| u - u_* \|_{L^2((0, T); \mathscr{U})} \big)
\big\| \mathcal{T}(u - u_*) \big\|_{\mathscr{H}}
\quad \forall u \in U. \end{equation} Fix any $(x, u) \in S_{\lambda}(c) \setminus \Omega$ (i.e. $x(T) \ne x_T$). Then taking into account the fact that $\mathcal{T}(u - u_*) = x(T) - x_T$ and utilising inequality \eqref{RobinsonUrsescu_InputMap_direct} one obtains that there exists $v \in U - u_*$ such that $\mathcal{T}(v) = 0$ and \begin{equation} \label{RobinsonUrsescu_InputMap}
\| u - u_* - v \|_{L^2((0, T); \mathscr{U})} \le
2 \kappa \big( 1 + \| u - u_* \|_{L^2((0, T); \mathscr{U})} \big) \big\| x(T) - x_T \big\|_{\mathscr{H}}. \end{equation} Define $\widehat{u} = u_* + v \in U$, and let $\widehat{x}$ be the corresponding solution of \eqref{LinearEvolEq}. Then $\widehat{x}(T) - x_T = \mathcal{T}(\widehat{u} - u_*) = \mathcal{T}(v) = 0$, i.e. $(\widehat{x}, \widehat{u}) \in \Omega$. Note that
$C := \sup_{(x, u) \in S_{\lambda}(c)} 2 \kappa ( 1 + \| u - u_* \|_{L^2((0, T); \mathscr{U})} ) < + \infty$ due to the boundedness of the set $S_{\lambda}(c)$. Consequently, by \eqref{RobinsonUrsescu_InputMap} for any $(x, u) \in S_{\lambda}(c) \setminus \Omega$ there exists $(\widehat{x}, \widehat{u}) \in \Omega$ such that
$\| u - \widehat{u} \|_{L^2((0, T); \mathscr{U})} \le C \| x(T) - x_T \|_{\mathscr{H}}$, i.e. \eqref{MetricReg_InputMap} holds true, and the proof is complete. \end{proof}
\begin{remark} Let us note that for the validity of the assumption $x_T \in \relint \mathcal{R}(x_0, T)$ in the case $\image(F_T) = \mathscr{H}$ it is sufficient to suppose that $x_T$ belongs to the interior of $\mathcal{R}(x_0, T)$, while in the case $U = L^2((0, T); \mathscr{U})$ this assumption is satisfied iff the image of the input map $F_T$ is closed. \end{remark}
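To see why the second assertion of the remark above holds true (a brief sketch), note that in the case $U = L^2((0, T); \mathscr{U})$ formula \eqref{SolutionViaSemiGroup} yields
$$
\mathcal{R}(x_0, T) = \big\{ \mathbb{T}_T x_0 + F_T u \bigm| u \in L^2((0, T); \mathscr{U}) \big\}
= \mathbb{T}_T x_0 + \image(F_T),
$$
i.e. $\mathcal{R}(x_0, T)$ is an affine subspace of $\mathscr{H}$ whose closed affine hull is $\mathbb{T}_T x_0 + \cl \image(F_T)$. If $\image(F_T)$ is closed, then $\relint \mathcal{R}(x_0, T) = \mathcal{R}(x_0, T)$; otherwise $\image(F_T)$ is a proper dense subspace of its closure and therefore has empty interior in it, so that $\relint \mathcal{R}(x_0, T) = \emptyset$.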
\begin{remark} \label{Remark_ComparisonZuazua} Recall that system \eqref{LinearEvolEq} is called \textit{exactly controllable} using $L^2$-controls in time $T$, if for any initial state $x_0 \in \mathscr{H}$ and for any final state $x_T \in \mathscr{H}$ there exists $u \in L^2((0, T), \mathscr{U})$ such that for the corresponding solution $x$ of \eqref{LinearEvolEq} one has $x(T) = x_T$. It is easily seen that this system is exactly controllable using $L^2$-controls in time $T$ iff the input map $F_T$ is surjective, i.e. $\image(F_T) = \mathscr{H}$. Thus, in particular, in Theorem~\ref{Theorem_Exactness_EvolutionEquations} it is sufficient to suppose that system \eqref{LinearEvolEq} is exactly controllable and $x_T \in \interior \mathcal{R}(x_0, T)$. If, in addition, $\interior U \ne \emptyset$, then it is sufficient to suppose that system \eqref{LinearEvolEq} is exactly controllable and there exists a feasible point $(x_*, u_*)$ of problem \eqref{EvolEqFixedEndPointProblem} such that $u_* \in \interior U$.
The exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqFixedEndPointProblem} with
$\theta(x, u, t) = \| u \|_{\mathscr{U}}^2 / 2$ and no constraints on the control inputs (i.e. $U = L^2((0, T); \mathscr{U})$) was proved by Gugat and Zuazua \cite{Zuazua} under the assumption that system \eqref{LinearEvolEq} is exactly controllable, and the control $u$ from the definition of exact controllability satisfies the inequality \begin{equation} \label{ExactControl_Unnecessary}
\| u \|_{L^2((0, T); \mathscr{U})} \le C \big( \| x_0 \| + \| x_T \| \big) \end{equation} for some $C > 0$ independent of $x_0$ and $x_T$. Note that our Theorem~\ref{Theorem_Exactness_EvolutionEquations} significantly generalises and strengthens \cite[Theorem~1]{Zuazua}, since we consider a more general objective function and convex constraints on the control inputs, impose a less restrictive assumption on the input map $F_T$ (instead of exact controllability it is sufficient to suppose that $\image(F_T)$ is closed), and demonstrate that inequality \eqref{ExactControl_Unnecessary} is, in fact, redundant. \end{remark}
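Let us also sketch why inequality \eqref{ExactControl_Unnecessary} is automatically satisfied whenever system \eqref{LinearEvolEq} is exactly controllable. If the bounded linear operator $F_T$ is surjective, then by the open mapping theorem there exists $c > 0$ such that any $y \in \mathscr{H}$ can be represented as $y = F_T u$ with $\| u \|_{L^2((0, T); \mathscr{U})} \le c \| y \|_{\mathscr{H}}$. Applying this representation to $y = x_T - \mathbb{T}_T x_0$ and taking into account the inequality $\| \mathbb{T}_T \| \le M_{\omega} e^{\omega T}$ one obtains a control $u$ steering $x_0$ to $x_T$ in time $T$ and such that
$$
\| u \|_{L^2((0, T); \mathscr{U})} \le c \big( \| x_T \| + M_{\omega} e^{\omega T} \| x_0 \| \big)
\le C \big( \| x_0 \| + \| x_T \| \big), \quad C = c \max\{ 1, M_{\omega} e^{\omega T} \},
$$
i.e. inequality \eqref{ExactControl_Unnecessary} holds true.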
Let us also extend Theorem~\ref{Theorem_FixedEndPointProblem_Linear_Global} to the case of optimal control problems for linear evolution equations.
\begin{theorem} \label{Theorem_Exactness_EvolutionEquations_Global} Let all assumptions of Theorem~\ref{Theorem_Exactness_EvolutionEquations} be valid, and suppose that either the set $U$ of admissible control inputs is bounded in $L^2((0, T), \mathscr{U})$ or the function $(x, u) \mapsto \theta(x, u, t)$ is convex for all $t \in [0, T]$. Then the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqFixedEndPointProblem} is completely exact on $A$. \end{theorem}
\begin{proof} Suppose at first that the set $U$ is bounded. Recall that the input map $F_t$ continuously maps
$L^2((0, T); \mathscr{U})$ to $\mathscr{H}$ by \cite[Proposition~4.2.2]{TucsnakWeiss} and $\| F_t \| \le \| F_T \|$ for any $t \le T$ (see \cite[formula $(4.2.5)$]{TucsnakWeiss}). Note also that by \cite[Proposition~2.1.2]{TucsnakWeiss}
there exist $\omega \in \mathbb{R}$ and $M_{\omega} \ge 1$ such that $\| \mathbb{T}_t \| \le M_{\omega} e^{\omega t}$ for all $t \ge 0$.
Fix any $(x, u) \in A$. By our assumption there exists $K > 0$ such that $\| u \|_{L^2((0, T), \mathscr{U})} \le K$ for any $u \in U$. Hence
$\| x \|_{C([0, T]; \mathscr{H})} \le M_{\omega} \max_{t \in [0, T]} e^{\omega t} \| x_0 \| + \| F_T \| K$
due to \eqref{SolutionViaSemiGroup}, and the bounds on $\| \mathbb{T}_t \|$ and $\| F_t \|$. Thus, the set $A$ is bounded in $X$. Now, arguing in the same way as in the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations}, but replacing $S_{\lambda}(c)$ with $A$ and utilising Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} instead of Theorem~\ref{Theorem_CompleteExactness}, one obtains the required result.
Suppose now that the function $(x, u) \mapsto \theta(x, u, t)$ is convex. Then the functional $\mathcal{I}(x, u)$ and the penalty function $\Phi_{\lambda}$ are convex. Hence with the use of the fact that the set $A$ is convex one obtains that any point of local minimum of $\Phi_{\lambda}$ on $A$ is also a point of global minimum of $\Phi_{\lambda}$ on $A$. Furthermore, any inf-stationary point of $\Phi_{\lambda}$ on $A$ is also a point of global minimum of $\Phi_{\lambda}$ on $A$. Indeed, let $(x^*, u^*)$ be an inf-stationary point of $\Phi_{\lambda}$ on $A$. Arguing by reductio ad absurdum, suppose that $(x^*, u^*)$ is not a point of global minimum. Then there exists $(x_0, u_0) \in A$ such that $\Phi_{\lambda}(x_0, u_0) < \Phi_{\lambda}(x^*, u^*)$. By applying the convexity of $\Phi_{\lambda}$ one gets that $$
\Phi_{\lambda}(x^* + \alpha( x_0 - x^* ), u^* + \alpha (u_0 - u^*)) \le
\Phi_{\lambda}(x^*, u^*) + \alpha(\Phi_{\lambda}(x_0, u_0) - \Phi_{\lambda}(x^*, u^*))
\quad \forall \alpha \in [0, 1]. $$ Consequently, one has $$
(\Phi_{\lambda})^{\downarrow}_A(x^*, u^*) \le \liminf_{\alpha \to + 0}
\frac{\Phi_{\lambda}(x^* + \alpha( x_0 - x^* ), u^* + \alpha (u_0 - u^*)) - \Phi_{\lambda}(x^*, u^*)}
{\alpha \| (x^*, u^*) - (x_0, u_0) \|_X} \le
\frac{\Phi_{\lambda}(x_0, u_0) - \Phi_{\lambda}(x^*, u^*)}{\| (x^*, u^*) - (x_0, u_0) \|_X} < 0 $$ which is impossible, since by the definition of inf-stationary point $(\Phi_{\lambda})^{\downarrow}_A(x^*, u^*) \ge 0$.
Similarly, any point of local minimum/inf-stationary point of $\mathcal{I}$ on $\Omega$ is a globally optimal solution of problem \eqref{EvolEqFixedEndPointProblem} due to the convexity of $\mathcal{I}$ and $\Omega$. Therefore, in the convex case the penalty function $\Phi_{\lambda}$ is completely exact on $A$ if and only if it is globally exact. In turn, the global exactness of this function follows from Theorem~\ref{Theorem_Exactness_EvolutionEquations}. \end{proof}
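Before turning to nonlinear systems, let us illustrate the exactness threshold numerically on a toy discretised problem (the sketch below is purely illustrative and is not an implementation of the results of this section; all names in it are ad hoc). For the scalar problem of minimising $\int_0^T u(t)^2/2 \, dt$ subject to $\dot{x} = u$, $x(0) = 0$, $x(T) = 1$, a direct computation (the optimal control is constant by the Cauchy--Schwarz inequality) shows that the penalty function $\Phi_{\lambda}$ is exact precisely for $\lambda \ge 1/T$:
\begin{verbatim}
import numpy as np

# For min \int_0^T u^2/2 dt + lam*|x(T) - 1| with dx/dt = u, x(0) = 0,
# the minimiser is a constant control u(t) = c (Cauchy-Schwarz), so
# minimising Phi_lam reduces to a one-dimensional search over c.
T = 1.0
c = np.linspace(0.0, 2.0, 200001)          # candidate constant controls
for lam in [0.5, 0.9, 1.0, 1.5, 3.0]:
    phi = 0.5 * T * c**2 + lam * np.abs(T * c - 1.0)
    c_opt = c[np.argmin(phi)]
    print(f"lam = {lam:3.1f}: |x(T) - 1| = {abs(T * c_opt - 1.0):.4f}")
# The terminal violation equals 1 - lam*T for lam < 1/T and vanishes
# for lam >= 1/T, i.e. the penalty function becomes exact at lam* = 1/T.
\end{verbatim}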
\subsection{Nonlinear Systems: Complete Exactness}
Now we turn to the analysis of nonlinear finite dimensional fixed-endpoint optimal control problems of the form: \begin{equation} \label{FixedEndPointProblem} \begin{split}
{}&\min \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \\
{}&\text{subject to } \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad x(T) = x_T, \quad u \in U. \end{split} \end{equation} Here $\theta \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}$ and $f \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^d$ are given functions, $x_0, x_T \in \mathbb{R}^d$, and $T > 0$ are fixed, $x \in W^d_{1, p}(0, T)$, $U \subseteq L_q^m(0, T)$ is a nonempty closed set, and $1 \le p, q \le + \infty$.
As in the case of linear problems, we penalise only the terminal constraint $x(T) = x_T$. To this end, define $X = W_{1, p}^d(0, T) \times L_q^m(0, T)$, $M = \{ (x, u) \in X \mid x(T) = x_T \}$, and \begin{equation} \label{Set_A_Nonlinear_FixedEndPoint}
A = \Big\{ (x, u) \in X \Bigm| x(0) = x_0, \: u \in U, \:
\dot{x}(t) = f(x(t), u(t), t) \text{ for a.e. } t \in [0, T] \Big\}. \end{equation} Then problem \eqref{FixedEndPointProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$
subject to $(x, u) \in M \cap A$. As in the previous sections, define $\varphi(x, u) = |x(T) - x_T|$. Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem \begin{equation} \label{PenProblem_NonlinearFixedEndPoint} \begin{split}
{}&\min \: \Phi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda \varphi(x, u)
= \int_0^T \theta(x(t), u(t), t) \, dt + \lambda |x(T) - x_T| \\
{}&\text{subject to } \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad u \in U, \end{split} \end{equation} which is a nonlinear free-endpoint optimal control problem.
The nonlinearity of the systems makes an analysis of the exactness of the penalty function $\Phi_{\lambda}(x, u)$ a very challenging problem. Unlike the linear case, it does not seem possible to obtain any easily verifiable conditions for the complete exactness of this function. Therefore, the main goal of this section is to understand what properties the system $\dot{x} = f(x, u, t)$ must have for the penalty function $\Phi_{\lambda}(x, u)$ to be completely exact.
In the linear case, the main assumption ensuring the complete exactness of the penalty function was $x_T \in \relint \mathcal{R}(x_0, T)$. Therefore, it is natural to expect that in the nonlinear case one must also impose some assumptions on the reachable set of the system $\dot{x} = f(x, u, t)$. Moreover, in the linear case we utilised Robinson's theorem, but there are no nonlocal analogues of this theorem in the nonlinear case. Consequently, we must impose an assumption that allows one to avoid the use of this theorem.
Thus, to prove the complete exactness of the penalty function $\Phi_{\lambda}$ in the nonlinear case we need to impose two assumptions on the controlled system $\dot{x} = f(x, u, t)$. The first one does not allow the reachable set of this system to be, roughly speaking, too ``wild'' near the point $x_T$, while the second one ensures that this system is, in a sense, sensitive enough with respect to the control inputs. It should be mentioned that the exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} can be proved under a much weaker assumption that imposes some restrictions on the reachable set and sensitivity with respect to the control inputs simultaneously. However, for the sake of simplicity we split this rather complicated assumption in two assumptions that are much easier to understand and analyse.
Denote by $\mathcal{R}(x_0, T) = \{ \xi \in \mathbb{R}^d \mid \exists (x, u) \in A \colon \xi = x(T) \}$ the set that is reachable in time $T$. We naturally suppose that $x_T \in \mathcal{R}(x_0, T)$. Also, we exclude the trivial case when $x_T$ is an isolated point of $\mathcal{R}(x_0, T)$: in this case the penalty function $\Phi_{\lambda}$ is completely exact on $S_{\lambda}(c)$ for any $c \in \mathbb{R}$ iff $\Phi_{\lambda}$ is bounded below on $A$, since $\Omega_{\delta} \setminus \Omega = \emptyset$ for any sufficiently small $\delta > 0$ (see~Remark~\ref{Remark_OmegaDeltaEmpty}).
\begin{definition} One says that the set $\mathcal{R}(x_0, T)$ has \textit{the negative tangent angle property} near $x_T$, if there exist a neighbourhood $\mathcal{O}(x_T)$ of $x_T$ and $\beta > 0$ such that for any $\xi \in \mathcal{O}(x_T) \cap \mathcal{R}(x_0, T)$, $\xi \ne x_T$, there exists a sequence $\{ \xi_n \} \subset \mathcal{R}(x_0, T)$ converging to $\xi$ and such that \begin{equation} \label{NegativeTangentAngleCond}
\left\langle \frac{\xi - x_T}{|\xi - x_T|}, \frac{\xi_n - \xi}{|\xi_n - \xi|} \right\rangle \le - \beta
\quad \forall n \in \mathbb{N}. \end{equation} \end{definition}
One can easily see that if $x_T$ belongs to the interior of $\mathcal{R}(x_0, T)$ or if there exists a neighbourhood $\mathcal{O}(x_T)$ of $x_T$ such that the intersection $\mathcal{O}(x_T) \cap \mathcal{R}(x_0, T)$ is convex, then the set $\mathcal{R}(x_0, T)$ has the negative tangent angle property near $x_T$ (take as $\{ \xi_n \}$ any sequence of points from the segment $\co\{ x_T, \xi \}$ converging to $\xi$ and put $\beta = 1$). However, this property holds true in a much more general case. In particular, $x_T$ can be the vertex of a cusp.
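As a simple model illustration (a sketch; the set below is not derived from any particular control system), let $d = 2$, $x_T = (0, 0)$, and suppose that near the origin $\mathcal{R}(x_0, T) = \{ \xi \in \mathbb{R}^2 \mid \xi_2 \ge 0, \ |\xi_1| \le \xi_2^2 \}$, which has a cusp at $x_T$. For a boundary point $\xi = (\xi_2^2, \xi_2)$ with $\xi_2 > 0$ define $\xi_n = ((\xi_2 - 1/n)^2, \xi_2 - 1/n)$. Then $(\xi_n - \xi)/|\xi_n - \xi| \to - (2 \xi_2, 1)/\sqrt{4 \xi_2^2 + 1}$ as $n \to \infty$ and
$$
\left\langle \frac{\xi - x_T}{|\xi - x_T|}, \frac{\xi_n - \xi}{|\xi_n - \xi|} \right\rangle
\to - \frac{2 \xi_2^2 + 1}{\sqrt{(\xi_2^2 + 1)(4 \xi_2^2 + 1)}} < - \frac{1}{2},
$$
so that inequality \eqref{NegativeTangentAngleCond} with $\beta = 1/2$ holds along a tail of this sequence; interior points are treated similarly (one can move along the scaled curve $s \mapsto (\xi_1 s^2 / \xi_2^2, s)$, which stays in the set).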
The negative tangent angle property excludes the sets that, roughly speaking, are ``very porous'' near $x_T$ (i.e. sets having an infinite number of ``holes'' in any neighbourhood of $x_T$) or are very wiggly near this point (like the graph of $y = x \sin(1 / x)$ near $(0, 0)$). Furthermore, bearing in mind the equality $$
\big\{ \xi \in \mathbb{R}^d \mid \exists (x, u) \in \Omega_{\delta} \setminus \Omega \colon \xi = x(T) \big\}
= \{ \xi \in \mathcal{R}(x_0, T) \mid 0 < |\xi - x_T| < \delta \}, $$
the definition of the rate of steepest descent, and the fact that $\varphi(x, u) = |x(T) - x_T|$ one can check that for the validity of the inequality $\varphi^{\downarrow}_A(x, u) \le - a$ for all $(x, u) \in \Omega_{\delta} \setminus \Omega$ and some $a, \delta > 0$ it is \textit{necessary} that there exists $\beta > 0$ such that for any $\xi \in \mathcal{R}(x_0, T)$ lying in a neighbourhood of $x_T$ inequality \eqref{NegativeTangentAngleCond} holds true.
Indeed, suppose that $\varphi^{\downarrow}_A(x, u) \le - a$ for all $(x, u) \in \Omega_{\delta} \setminus \Omega$ and some $a, \delta > 0$. Let $\xi \in \mathcal{R}(x_0, T)$ satisfy the inequalities
$0 < |\xi - x_T| < \delta$, and $(x, u) \in \Omega_{\delta} \setminus \Omega$ be such that $x(T) = \xi$. By the definition of $\varphi^{\downarrow}_A(x, u)$ there exists a sequence $\{ (x_n, u_n) \} \subset A$ converging to $(x, u)$ and such that \begin{multline*}
- \frac{2a}{3} \ge \frac{\varphi(x_n, u_n) - \varphi(x, u)}{\| (x_n - x, u_n - u) \|_X}
= \frac{|x_n(T) - x_T| - |x(T) - x_T|}{\| (x_n - x, u_n - u) \|_X} \\
= \frac{1}{\| (x_n - x, u_n - u) \|_X}
\left( \left\langle \frac{x(T) - x_T}{|x(T) - x_T|}, x_n(T) - x(T) \right\rangle + o(|x_n(T) - x(T)|) \right) \end{multline*} for all $n \in \mathbb{N}$. Hence with the use of inequality \eqref{SobolevImbedding} one obtains that
$0 < |x_n(T) - x_T| < \delta$ for any sufficiently large $n$, and there exists $n_0 \in \mathbb{N}$ such that inequality \eqref{NegativeTangentAngleCond} is satisfied with $\xi_n = x_{n + n_0}(T)$, $n \in \mathbb{N}$, and $\beta = a / (3 C_p)$. Thus, the negative tangent angle property is closely related to the validity of assumption \ref{NegativeDescentRateAssumpt} of Theorem~\ref{Theorem_CompleteExactness}.
\begin{definition} \label{Def_SensitivityProperty} Let $K \subset A$ be a given set. One says that the property $(\mathcal{S})$ is satisfied on the set $K$, if there exists $C > 0$ such that for any $(x, u) \in K$ one can find a neighbourhood $\mathcal{O}(x(T)) \subset \mathbb{R}^d$ of $x(T)$ such that for all $\widehat{x}_T \in \mathcal{O}(x(T)) \cap \mathcal{R}(x_0, T)$ there exists a control input $\widehat{u} \in U$ that steers the system from $x(0) = x_0$ to $\widehat{x}_T$ in time $T$, and \begin{equation} \label{SensitivityCondition}
\| u - \widehat{u} \|_q + \| x - \widehat{x} \|_{1, p} \le C | x(T) - \widehat{x}(T) |, \end{equation} where $\widehat{x}$ is a trajectory corresponding to $\widehat{u}$, i.e. $(\widehat{x}, \widehat{u}) \in A$. \end{definition}
Let $K \subset A$ be a given set. Recall that the set $A$ consists of all those pairs $(x, u) \in X$ for which $u \in U$, and $x$ is a solution of $\dot{x} = f(x, u, t)$ with $x(0) = x_0$ (see~\eqref{Set_A_Nonlinear_FixedEndPoint}). Roughly speaking, the property $(\mathcal{S})$ is satisfied on $K$ iff for any $(x, u) \in K$ and any reachable end-point $\widehat{x}_T \in \mathcal{R}(x_0, T)$ lying sufficiently close to $x(T)$ one can reach $\widehat{x}_T$ by slightly changing the control input $u$ in such a way that the corresponding trajectory stays in a sufficiently small neighbourhood of $x(\cdot)$ (more precisely, the magnitude of change of $u$ and
$x$ must be proportional to $| x(T) - \widehat{x}_T |$). Note that the property $(\mathcal{S})$ implicitly appeared in the proofs of Theorems~\ref{Theorem_FixedEndPointProblem_Linear} and \ref{Theorem_Exactness_EvolutionEquations} (cf.~\eqref{ErrorBound_TerminalConstraint} and \eqref{PropertyS_EvolutionEquation}) and was proved with the use of Robinson's theorem.
\begin{remark} \label{Remark_SensitivityProperty} Let the function $(x, u) \mapsto f(x, u, t)$ be locally Lipschitz continuous uniformly for all $t \in (0, T)$. Suppose also that the set $A$ is bounded in $L_{\infty}^d(0, T) \times L_{\infty}^m(0, T)$, i.e. the control inputs and corresponding trajectories of the system are uniformly bounded. By definition
$x_i(t) = x_0 + \int_0^t f(x_i(\tau), u_i(\tau), \tau) \, d\tau$, $i = 1, 2$, which implies that $|x_1(t) - x_2(t)| \le \int_0^t | f(x_1(\tau), u_1(\tau), \tau) - f(x_2(\tau), u_2(\tau), \tau) | \, d\tau$ for any $(x_1, u_1), (x_2, u_2) \in A$. Therefore, due to the boundedness of $A$ and the Lipschitz continuity of $f$ there exists $L > 0$ such that $$
|x_1(t) - x_2(t)|
\le L \int_0^t |x_1(\tau) - x_2(\tau)| d \tau + L \int_0^t |u_1(\tau) - u_2(\tau)| \, d \tau
\le L \int_0^t |x_1(\tau) - x_2(\tau)| d \tau + L T^{1/q'} \| u_1 - u_2 \|_q $$ for all $t \in [0, T]$. Hence with the use of the Gr\"{o}nwall-Bellman inequality one can easily check that there exists
$L_1 > 0$ such that $\| x_1 - x_2 \|_{\infty} \le L_1 \| u_1 - u_2 \|_q$ for any $(x_1, u_1), (x_2, u_2) \in A$. Then by applying the inequality $$
|\dot{x}_1(t) - \dot{x}_2(t)| \le \big| f(x_1(t), u_1(t), t) - f(x_2(t), u_2(t), t) \big|
\le L |x_1(t) - x_2(t)| + L |u_1(t) - u_2(t)| $$ and H\"{o}lder's inequality (here we suppose that $q \ge p$) one obtains that there exists $L_2 > 0$ such that
$\| x_1 - x_2 \|_{1, p} \le L_2 \| u_1 - u_2 \|_q$ for all $(x_1, u_1), (x_2, u_2) \in A$. In other words, the map
$u \mapsto x_u$, where $x_u$ is a solution of $\dot{x} = f(x, u, t)$ with $x(0) = x_0$, is Lipschitz continuous on $U$. Therefore, under the assumptions of this remark inequality \eqref{SensitivityCondition} in the definition of the property $(\mathcal{S})$ can be replaced with the inequality $\| u - \widehat{u} \|_q \le C | x(T) - \widehat{x}(T) |$. \end{remark}
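The constant $L_1$ in the remark above can be written out explicitly (a straightforward computation): the function $v(t) = |x_1(t) - x_2(t)|$ satisfies $v(t) \le L T^{1/q'} \| u_1 - u_2 \|_q + L \int_0^t v(\tau) \, d\tau$, and the Gr\"{o}nwall--Bellman inequality yields
$$
\| x_1 - x_2 \|_{\infty} \le L T^{1/q'} e^{L T} \| u_1 - u_2 \|_q,
\quad \text{i.e. one can take } L_1 = L T^{1/q'} e^{L T}.
$$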
A detailed analysis of the property $(\mathcal{S})$ lies beyond the scope of this paper. Here we only note that the property $(\mathcal{S})$ is, in essence, a reformulation of the assumption that the mapping $u \mapsto x_u(T)$ is \textit{metrically regular} on the set $K$ (here $x_u$ is a solution of $\dot{x} = f(x, u, t)$ with $x(0) = x_0$). Thus, it seems possible to apply general results on metric regularity \cite{Aze,Cominetti,Ioffe,Dmitruk} to verify whether the property $(\mathcal{S})$ is satisfied in particular cases. Our aim is to show that this property along with the negative tangent angle property ensures that the penalty function $\Phi_{\lambda}$ for fixed-endpoint problem \eqref{FixedEndPointProblem} is completely exact. Denote by $\mathcal{I}^*$ the optimal value of this problem.
\begin{theorem} \label{Theorem_FixedEndPointProblem_NonLinear} Let the following assumptions be valid: \begin{enumerate} \item{$\theta$ is continuous and differentiable in $x$ and $u$, and the functions $\nabla_x \theta$, $\nabla_u \theta$, and $f$ are continuous; }
\item{either $q = + \infty$ or $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$; }
\item{there exists a globally optimal solution of problem \eqref{FixedEndPointProblem};}
\item{the set $\mathcal{R}(x_0, T)$ has the negative tangent angle property near $x_T$;}
\item{there exist $\lambda_0 > 0$, $c > \mathcal{I}^*$, and $\delta > 0$ such that the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded in $W^d_{1, p}(0, T) \times L_q^m(0, T)$, the property $(\mathcal{S})$ is satisfied on $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$, and the function $\Phi_{\lambda_0}(x, u)$ is bounded below on $A$. } \end{enumerate} Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} is completely exact on $S_{\lambda}(c)$. \end{theorem}
\begin{proof} As was noted in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}, the growth conditions on the function $\theta$ and its derivatives ensure that the functional $\mathcal{I}$ is Lipschitz continuous on any bounded open set containing the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$. The continuity of the penalty term
$\varphi(x, u) = |x(T) - x_T|$ can also be verified in the same way as in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}.
Let us check that the set $A$ is closed. Indeed, choose any sequence $\{ (x_n, u_n) \} \subset A$ converging to some $(x_*, u_*) \in X$. Recall that the set $U$ is closed and by definition $\{ u_n \} \subset U$. Therefore $u_* \in U$. By inequality \eqref{SobolevImbedding} the sequence $x_n$ converges to $x_*$ uniformly on $[0, T]$, which, in particular, implies that $x_*(0) = x_0$. Note also that by definition $\{ \dot{x}_n \}$ converges to $\dot{x}_*$ in $L_p^d(0, T)$, while $\{ u_n \}$ converges to $u_*$ in $L_q^m(0, T)$. As is well known (see, e.g. \cite[Theorem~2.20]{FonsecaLeoni}), one can extract subsequences $\{ \dot{x}_{n_k} \}$ and $\{ u_{n_k} \}$ that converge almost everywhere. From the fact that $(x_{n_k}, u_{n_k}) \in A$ it follows that $\dot{x}_{n_k}(t) = f(x_{n_k}(t), u_{n_k}(t), t)$ for a.e. $t \in (0, T)$. Consequently, passing to the limit as $k \to \infty$ with the use of the continuity of $f$ one obtains that $\dot{x}_*(t) = f(x_*(t), u_*(t), t)$ for a.e. $t \in (0, T)$, i.e. $(x_*, u_*) \in A$, and the set $A$ is closed. Thus, by Theorem~\ref{Theorem_CompleteExactness} it remains to check that there exists $0 < \eta \le \delta$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\eta} \setminus \Omega)$.
Let $0 < \eta \le \delta$ be arbitrary, and fix $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\eta} \setminus \Omega)$. By definition one has $0 < \varphi(x, u) = |x(T) - x_T| < \eta$. Decreasing $\eta$, if necessary, and utilising the negative tangent angle property one obtains that there exist $\beta > 0$ (independent of $(x, u)$) and a sequence $\{ \xi_n \} \subset \mathcal{R}(x_0, T)$ converging to $x(T)$ such that \begin{equation} \label{NegativeTangentAngle}
\left\langle \frac{x(T) - x_T}{|x(T) - x_T|}, \frac{\xi_n - x(T)}{|\xi_n - x(T)|} \right\rangle \le - \beta
\quad \forall n \in \mathbb{N}. \end{equation} By applying the property $(\mathcal{S})$ one obtains that there exists $C > 0$ (independent of $(x, u)$) such that for any sufficiently large $n \in \mathbb{N}$ one can find $(x_n, u_n) \in A$ satisfying the inequality \begin{equation} \label{SensitivityCond_Sequence}
\sigma_n := \| u - u_n \|_q + \| x - x_n \|_{1, p} \le C |x(T) - x_n(T)| \end{equation} and such that $x_n(T) = \xi_n$.
By the definition of rate of steepest descent one has $$
\varphi^{\downarrow}_A(x, u)
\le \liminf_{n \to \infty} \frac{\varphi(x_n, u_n) - \varphi(x, u)}{\sigma_n}. $$ Taking into account the equality $$
\varphi(x_n, u_n) - \varphi(x, u) =
\left\langle \frac{x(T) - x_T}{|x(T) - x_T|}, \xi_n - x(T) \right\rangle + o(|\xi_n - x(T)|), $$
where $o(|\xi_n - x(T)|) / |\xi_n - x(T)| \to 0$ as $n \to \infty$, and inequality \eqref{NegativeTangentAngle}, one obtains that $$
\varphi^{\downarrow}_A(x, u) \le \liminf_{n \to \infty}
\left( - \beta \frac{|\xi_n - x(T)|}{\sigma_n} + \frac{o(|\xi_n - x(T)|)}{\sigma_n} \right). $$
By applying the inequality $|x(T) - x_n(T)| \le T^{1/p'} \| \dot{x} - \dot{x}_n \|_p \le T^{1/p'} \sigma_n$ one gets that $o(|\xi_n - x(T)|) / \sigma_n \to 0$ as $n \to \infty$. Hence with the use of inequality \eqref{SensitivityCond_Sequence} one obtains that $\varphi^{\downarrow}_A(x, u) \le - \beta / C$, and the proof is complete. \end{proof}
\begin{remark} It is worth noting that in Example~\ref{Example_EndPoint_NotRelInt}, $(i)$ the functions $\theta$ and $f$ satisfy all assumptions of Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear}, $(ii)$ there exists a globally optimal solution, $(iii)$ the set $\mathcal{R}(x_0, T)$ has the negative tangent angle property near $x_T$, and $(iv)$ the set $A$ is bounded and the penalty function $\Phi_{\lambda}$ is bounded below on $A$ for any $\lambda \ge 0$. However, this penalty function is not globally exact. Therefore, by Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} one can conclude that in this example the property $(\mathcal{S})$ is not satisfied on $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ for any $\lambda_0 \ge 0$, $c > \mathcal{I}^*$, and $\delta > 0$, when $x_T = 0$. Arguing in a similar way to the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations} and utilising Robinson's theorem one can check that the property $(\mathcal{S})$ is satisfied on $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ for some $\lambda_0 \ge 0$, $c > \mathcal{I}^*$, and $\delta > 0$, provided $x_T \in \{ 0 \} \times (0, 1)$. Thus, although the property $(\mathcal{S})$ might seem independent of the end-point $x_T$, the validity of this property on the set $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ depends on the point $x_T$ and, in particular, its location in the reachable set $\mathcal{R}(x_0, T)$. \end{remark}
\subsection{Nonlinear Systems: Local Exactness}
Although Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} gives a general understanding of sufficient conditions for the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem}, its assumptions cannot be readily verified for any particular problem. Therefore, it is desirable to have at least verifiable sufficient conditions for the \textit{local} exactness of this penalty function. Our aim is to show a connection between the local exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} and the complete controllability of the corresponding linearised system. This result serves as an illuminating example of how one can apply Theorems~\ref{Theorem_LocalExactness} and \ref{Theorem_LocalErrorBound} to verify the local exactness of a penalty function.
Recall that the linear system \begin{equation} \label{ExactControllability_Def}
\dot{x}(t) = A(t) x(t) + B(t) u(t) \end{equation} with $x \in \mathbb{R}^d$ and $u \in \mathbb{R}^m$ is called \textit{completely controllable} using $L^q$-controls in time $T$, if for any initial state $x_0 \in \mathbb{R}^d$ and any final state $x_T \in \mathbb{R}^d$ one can find $u \in L_q^m(0, T)$ such that there exists an absolutely continuous solution $x$ of \eqref{ExactControllability_Def} with $x(0) = x_0$ defined on $[0, T]$ and satisfying the equality $x(T) = x_T$.
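Recall also that in the time-invariant case ($A(t) \equiv A$, $B(t) \equiv B$) complete controllability in any time $T > 0$ is equivalent to the Kalman rank condition $\operatorname{rank} [B, AB, \ldots, A^{d-1} B] = d$. For instance, the double integrator $\dot{h}_1 = h_2$, $\dot{h}_2 = v$ is completely controllable, since for
$$
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\quad \text{one has} \quad
\operatorname{rank} [B, AB] = \operatorname{rank} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = 2.
$$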
\begin{theorem} \label{Theorem_LocalExactness_TerminalConstraint} Let $U = L_q^m(0, T)$, $q \ge p$, and $(x^*, u^*)$ be a locally optimal solution of problem \eqref{FixedEndPointProblem}. Let also the following assumptions be valid: \begin{enumerate} \item{$\theta$ and $f$ are continuous, differentiable in $x$ and $u$, and the functions $\nabla_x \theta$, $\nabla_u \theta$, $\nabla_x f$, and $\nabla_u f$ are continuous; \label{Assumpt_Smoothness_LocalEx_TerminConstr}}
\item{either $q = + \infty$ or $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$, $f$ and $\nabla_x f$ satisfy the growth condition of order $(q / p, p)$, and $\nabla_u f$ satisfies the growth condition of order $(q / s, s)$ with $s = qp / (q - p)$ in the case $q > p$, and $\nabla_u f$ does not depend on $u$ in the case $q = p$; }
\item{the linearised system $$
\dot{h}(t) = A(t) h(t) + B(t) v(t) $$ with $A(t) = \nabla_x f(x^*(t), u^*(t), t)$ and $B(t) = \nabla_u f(x^*(t), u^*(t), t)$ is completely controllable using $L^q$-controls in time $T$. \label{Assumpt_LinearisedCompleteControllability}} \end{enumerate} Then the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} is locally exact at $(x^*, u^*)$. \end{theorem}
\begin{proof} As was noted in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}, the growth conditions on the function $\theta$ and its derivatives ensure that the functional $\mathcal{I}$ is Lipschitz continuous on any bounded subset of $X$ (in particular, in any bounded neighbourhood of $(x^*, u^*)$).
For any $(x, u) \in X$ define $$
F(x, u) = \begin{pmatrix} \dot{x}(\cdot) - f(x(\cdot), u(\cdot), \cdot) \\ x(T) \end{pmatrix},
\quad K = \begin{pmatrix} 0 \\ x_T \end{pmatrix}. $$ Our aim is to apply Theorem~\ref{Theorem_LocalErrorBound} with $C = \{ (x, u) \in X \mid x(0) = x_0 \}$ to the operator $F$. Then one gets that there exists $a > 0$ such that $$
\dist(F(x, u), K) \ge a \dist( (x, u), F^{-1}(K) \cap C) $$ for any $(x, u) \in C$ in a neighbourhood of $(x^*, u^*)$. Hence taking into account the facts that
$\dist(F(x, u), K) = |x(T) - x_T| = \varphi(x, u)$ for any $(x, u) \in A$, and $F^{-1}(K) \cap C$ coincides with the feasible set $\Omega$ of problem \eqref{FixedEndPointProblem} one obtains that $\varphi(x, u) \ge a \dist((x, u), \Omega)$ for any $(x, u) \in A$ in a neighbourhood of $(x^*, u^*)$. Then by applying Theorem~\ref{Theorem_LocalExactness} one obtains the desired result.
By Theorem~\ref{Theorem_DiffNemytskiiOperator} (see Appendix~B) the growth conditions on the function $f$ and its derivatives guarantee that the nonlinear operator $F$ maps $X$ to $L^d_p(0, T) \times \mathbb{R}^d$, is strictly differentiable at $(x^*, u^*)$, and its Fr\'{e}chet derivative at this point has the form $$
DF(x^*, u^*)[h, v] = \begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ h(T) \end{pmatrix}, $$ where $A(t) = \nabla_x f(x^*(t), u^*(t), t)$ and $B(t) = \nabla_u f(x^*(t), u^*(t), t)$. Observe also that $C - (x^*, u^*) = \{ (h, v) \in X \mid h(0) = 0 \}$ and $K - F(x^*, u^*) = (0, 0)^T$, since $x^*(0) = x_0$ and $x^*(T) = x_T$ by definition. Consequently, the regularity condition \eqref{MetricRegCond} from Theorem~\ref{Theorem_LocalErrorBound} takes the form: for any $\omega \in L^d_p(0, T)$ and $h_T \in \mathbb{R}^d$ there exists $(h, v) \in X$ such that \begin{equation} \label{MetricRegCond_TerminalConstraint}
\dot{h}(t) = A(t) h(t) + B(t) v(t) + \omega(t) \quad \text{for a.e. } t \in (0, T), \quad
h(0) = 0, \quad h(T) = h_T. \end{equation} Let us check that this condition holds true. Then by applying Theorem~\ref{Theorem_LocalErrorBound} we arrive at the required result.
Fix any $\omega \in L^d_p(0, T)$ and $h_T \in \mathbb{R}^d$. Let $h_1$ be an absolutely continuous solution of the equation $\dot{h}_1(t) = A(t) h_1(t) + \omega(t)$ with $h_1(0) = 0$ defined on $[0, T]$ (the existence of such solution follows from \cite[Theorem~1.1.3]{Filippov}). From the fact that $\nabla_x f$ satisfies the growth condition of order $(q / p, p)$ in the case $q < + \infty$ it follows that $A(\cdot) \in L_p^{d \times d}(0, T)$ (in the case $q = + \infty$ one obviously has $A(\cdot) \in L_{\infty}^{d \times d}(0, T)$). Hence $h_1 \in W_{1, p}^d(0, T)$, since $h_1$ is absolutely continuous and the right-hand side of the equality $\dot{h}_1(t) = A(t) h_1(t) + \omega(t)$ belongs to $L^d_p(0, T)$.
Let $v \in L_q^m(0, T)$ be such that an absolutely continuous solution $h_2$ of the system $\dot{h}_2(t) = A(t) h_2(t) + B(t) v(t)$ with $h_2(0) = 0$ satisfies the equality $h_2(T) = - h_1(T) + h_T$. Note that such $v$ exists due to the complete controllability assumption. By applying the fact that $\nabla_u f$ satisfies the growth condition of order $(q / s, s)$ one obtains that $B(\cdot) \in L_s^{d \times m}(0, T)$ in the case $p < q < + \infty$, which with the use of H\"{o}lder's inequality implies that $B(\cdot) v(\cdot) \in L_p^d(0, T)$ (in the case $q = + \infty$ one obviously has $B(\cdot) v(\cdot) \in L_{\infty}^d(0, T)$, while in the case $p = q < + \infty$ one has $B(\cdot) \in L_{\infty}^{d \times m}(0, T)$, since $\nabla_u f$ does not depend on $u$, and $B(\cdot) v(\cdot) \in L_p^d(0, T)$). Therefore $h_2 \in W_{1, p}^d(0, T)$ by virtue of the fact that the right-hand side of $\dot{h}_2(t) = A(t) h_2(t) + B(t) v(t)$ belongs to $L^d_p(0, T)$. It remains to note that the pair $(h_1 + h_2, v)$ belongs to $X$ and satisfies \eqref{MetricRegCond_TerminalConstraint}. \end{proof}
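Assumption~\ref{Assumpt_LinearisedCompleteControllability} can often be verified numerically. The following sketch (illustrative only; all names and tolerances in it are ad hoc) is based on the classical fact that a linear time-varying system is completely controllable on $[0, T]$ with $L^2$-controls iff its reachability Gramian $W(T)$ is positive definite, where $W$ solves $\dot{W} = A(t) W + W A(t)^T + B(t) B(t)^T$, $W(0) = 0$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def gramian_min_eig(A, B, T, d):
    """Smallest eigenvalue of the reachability Gramian W(T), where
    dW/dt = A(t) W + W A(t)^T + B(t) B(t)^T and W(0) = 0."""
    def rhs(t, w):
        W = w.reshape(d, d)
        At, Bt = A(t), B(t)
        return (At @ W + W @ At.T + Bt @ Bt.T).ravel()
    sol = solve_ivp(rhs, (0.0, T), np.zeros(d * d), rtol=1e-10, atol=1e-12)
    W = sol.y[:, -1].reshape(d, d)
    return np.linalg.eigvalsh(0.5 * (W + W.T)).min()

# Example: the double integrator. In the setting of the theorem one would
# instead evaluate A(t) = \nabla_x f(x*(t), u*(t), t) and
# B(t) = \nabla_u f(x*(t), u*(t), t) along the optimal process.
A = lambda t: np.array([[0.0, 1.0], [0.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0]])
print(gramian_min_eig(A, B, 1.0, 2) > 0.0)   # True: completely controllable
\end{verbatim}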
\begin{remark} From the proof of Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} it follows that under the assumption of this theorem the penalty function $$
\Psi_{\lambda}(x, u) = \mathcal{I}(x, u) + \lambda \bigg[ |x(T) - x_T|
+ \Big( \int_0^T \big| \dot{x}(t) - f(x(t), u(t), t) \big|^p \, dt \Big)^{1 / p} \bigg] $$ is locally exact at $(x^*, u^*)$, i.e. $(x^*, u^*)$ is a point of local minimum of this penalty function on the set $\{ (x, u) \in X \mid x(0) = x_0 \}$ for any sufficiently large $\lambda$. \end{remark}
\begin{remark} Let us note that Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} can be extended to the case of problems with convex constraints on control inputs, but in this case the complete controllability assumption must be replaced by a much more restrictive assumption. Namely, let $U \subset L_q^m(0, T)$ be a closed convex set, $(x^*, u^*)$ be a locally optimal solution of problem \eqref{FixedEndPointProblem}, and $\dot{h}(t) = A(t) h(t) + B(t) v(t)$ be the corresponding linearised system. Define $C = \{ (x, u) \in X \mid x(0) = x_0, \: u \in U \}$ and $K = (0, x_T)^T$. One can easily see that in this case the regularity condition \eqref{MetricRegCond} takes the form: for any $\omega \in L^d_p(0, T)$ and $h_T \in \mathbb{R}^d$ there exists $(h, v) \in X$ such that $v \in \cone(U - u^*)$ and $$
\dot{h}(t) = A(t) h(t) + B(t) v(t) + \omega(t) \quad \text{for a.e. } t \in (0, T), \quad
h(0) = 0, \quad h(T) = h_T, $$ where $\cone(U - u^*) = \bigcup_{\alpha \ge 0} \alpha(U - u^*)$ is the cone generated by the set $U - u^*$. If $u^* \in \interior U$, then $\cone(U - u^*) = L^m_q(0, T)$, and this regularity condition is equivalent to the complete controllability of the linearised system. However, if $u^* \notin \interior U$, then one must suppose that for any initial state $x_0 \in \mathbb{R}^d$ and for any final state $x_T \in \mathbb{R}^d$ one can find $u \in \cone(U - u^*)$ such that there exists an absolutely continuous solution $x$ of \eqref{ExactControllability_Def} with $x(0) = x_0$ defined on $[0, T]$ and satisfying the equality $x(T) = x_T$, i.e. the linearised system must be completely controllable using control inputs from $\cone(U - u^*)$. If this assumption is satisfied, then arguing in the same way as in the proof of Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} one can verify that the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} is locally exact at $(x^*, u^*)$. \end{remark}
It should be noted that the complete controllability of the linearised system is \textit{not} necessary for the local exactness of the penalty function $\Phi_{\lambda}(x, u)$, as the following simple example shows.
\begin{example} Let $d = m = 1$ and $p = q = 2$. Consider the following fixed-endpoint optimal control problem: \begin{equation} \label{Ex_DegenerateLinearisation}
\min \mathcal{I}(u) = - \int_0^T u(t)^2 dt \quad
\text{s.t.} \quad \dot{x}(t) = x(t) + u(t)^2, \quad t \in [0, T], \quad x(0) = x(T) = 0, \quad u \in L^2(0, T). \end{equation} Solving the differential equation one obtains that $x(t) = \int_0^t e^{t - \tau} u(\tau)^2 d \tau$ for all $t \in [0, T]$, which implies that the only feasible point of this problem is $(x^*, u^*)$ with $x^*(t) \equiv 0$ and $u^*(t) = 0$ for a.e. $t \in [0, T]$. Thus, $(x^*, u^*)$ is a globally optimal solution of this problem. The linearised system at this point has the form $\dot{h} = h$. Clearly, it is not completely controllable, which renders Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} inapplicable. Let us show that, nevertheless, the penalty function $\Phi_{\lambda}$ for problem \eqref{Ex_DegenerateLinearisation} is globally exact.
Indeed, in this case the penalised problem has the form $$
\min \Phi_{\lambda}(x, u) = - \int_0^T u(t)^2 dt + \lambda |x(T)| \quad
\text{s.t.} \quad \dot{x}(t) = x(t) + u(t)^2, \quad t \in [0, T], \quad x(0) = 0, \quad u \in L^2(0, T). $$ With the use of the fact that $x(t) = \int_0^t e^{t - \tau} u(\tau)^2 d \tau$ one gets that $$
\Phi_{\lambda}(x, u) = - \int_0^T u(t)^2 dt + \lambda \int_0^T e^{T-t} u(t)^2 \, dt
\ge - \int_0^T u(t)^2 dt + \lambda \int_0^T u(t)^2 \, dt $$ for any $u \in L^2(0, T)$. Therefore, for all $\lambda \ge 1$ one has $\Phi_{\lambda}(x, u) \ge 0 = \Phi_{\lambda}(x^*, u^*)$ for any feasible point of the penalised problem, i.e. the penalty function $\Phi_{\lambda}$ for problem \eqref{Ex_DegenerateLinearisation} is globally exact. \end{example}
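A quick numerical sanity check of the estimate above (purely illustrative; the script below merely samples random controls and evaluates $\Phi_1$ via the explicit solution formula):
\begin{verbatim}
import numpy as np

# For the example above with lambda = 1 and T = 1: using the explicit
# solution x(t) = \int_0^t e^{t - s} u(s)^2 ds, one has
# Phi_1(x, u) = \int_0^T (e^{T - t} - 1) u(t)^2 dt >= 0.
rng = np.random.default_rng(0)
T, n = 1.0, 10000
t = np.linspace(0.0, T, n)
dt = t[1] - t[0]
for _ in range(5):
    u = rng.normal(size=n)
    x_T = np.sum(np.exp(T - t) * u**2) * dt   # x(T), Riemann sum
    phi = -np.sum(u**2) * dt + x_T            # Phi_1(x, u); |x(T)| = x(T)
    print(phi >= 0.0)                         # True for every sample
\end{verbatim}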
\begin{remark} As Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} demonstrates, the local exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} is implied by the complete controllability of the corresponding linearised system. It should be noted that a similar result can be proved in the case of complete exactness, but one must assume some sort of \textit{uniform} complete controllability of linearised systems.
A definition of uniform complete controllability can be given in the following way. With the use of the open mapping theorem (see~\cite[formula~$(0.2)$]{Ioffe}) one can check that if the linear system \begin{equation} \label{LinSys_UniformExactControllability}
\dot{x}(t) = A(t) x(t) + B(t) u(t), \end{equation} is completely controllable using $L^q$-controls, then there exists $C > 0$ such that for any $x_T$ one can find
$u \in L^m_q(0, T)$ with $\| u \|_q \le C |x_T|$ such that for the corresponding solution $x(\cdot)$ of
\eqref{LinSys_UniformExactControllability} with $x(0) = 0$ one has $x(T) = x_T$. In other words, one can steer the state of system \eqref{LinSys_UniformExactControllability} from the origin to any point $x_T$ in time $T$ with the use of a control input whose $L^q$-norm is proportional to $|x_T|$. Denote the greatest lower bound of all such $C$ by $C_T(A(\cdot), B(\cdot))$. In the case when system \eqref{LinSys_UniformExactControllability} is not completely controllable we put $C_T(A(\cdot), B(\cdot)) = + \infty$. Then one can say that the nonlinear system $\dot{x} = f(x, u, t)$ is \textit{uniformly completely controllable in linear approximation} on a set $K \subseteq A$, if there exists $C > 0$ such that $C_T( \nabla_x f(x(\cdot), u(\cdot), \cdot), \nabla_u f(x(\cdot), u(\cdot), \cdot) ) \le C$ for any $(x, u) \in K$. With the use of general results on nonlocal metric regularity \cite{Dmitruk} one can check that under some natural assumptions on the function $f$ uniform complete controllability in linear approximation on a set $K \subseteq A$ guarantees that the property $(\mathcal{S})$ is satisfied on this set, provided $U = L^m_q(0, T)$. Hence, by applying Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} one can prove that uniform complete controllability in linear approximation of the nonlinear system $\dot{x} = f(x, u, t)$ implies that the penalty function $\Phi_{\lambda}$ for problem \eqref{FixedEndPointProblem} is completely exact. A detailed proof of this result lies beyond the scope of this paper, and we leave it to the interested reader. \end{remark}
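For $q = 2$ the constant $C_T(A(\cdot), B(\cdot))$ admits a classical upper bound via the reachability Gramian (a sketch): if $\Phi$ is the state transition matrix of $\dot{x} = A(t) x$ and $W(T) = \int_0^T \Phi(T, t) B(t) B(t)^T \Phi(T, t)^T \, dt$ is positive definite, then the minimum-norm control steering the origin to $x_T$ in time $T$ is $u(t) = B(t)^T \Phi(T, t)^T W(T)^{-1} x_T$, and
$$
\| u \|_2^2 = \big\langle W(T)^{-1} x_T, x_T \big\rangle \le \lambda_{\min}(W(T))^{-1} |x_T|^2,
\quad \text{so that} \quad C_T(A(\cdot), B(\cdot)) \le \lambda_{\min}(W(T))^{-1/2}.
$$
Consequently, uniform complete controllability in linear approximation holds on a set $K \subseteq A$ whenever the smallest eigenvalue of the Gramian of the corresponding linearised system is bounded away from zero uniformly over $(x, u) \in K$.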
\subsection{Variable-Endpoint Problems}
Let us briefly outline how the main results on the exact penalisation of terminal constraints from previous sections (in particular, Theorems~\ref{Theorem_LocalExactness_TerminalConstraint} and \ref{Theorem_FixedEndPointProblem_Linear}) can be extended to the case of variable-endpoint problems of the form \begin{equation} \label{VariableEndPointPenaltyProblem} \begin{split}
&\min \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt + \zeta(x(T)) \quad
\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \\
&x(0) = x_0, \quad g_i(x(T)) \le 0 \quad \forall i \in I, \quad g_k(x(T)) = 0 \quad \forall k \in J,
\quad u \in U. \end{split} \end{equation} Here $\theta \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}$, $\zeta \colon \mathbb{R}^d \to \mathbb{R}$, $f \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^d$, and $g_i \colon \mathbb{R}^d \to \mathbb{R}$ are given functions, $i \in I \cup J$, $I = \{ 1, \ldots, l_1 \}$, $J = \{ l_1 + 1, \ldots, l_2 \}$, $x_0 \in \mathbb{R}^d$, and $T > 0$ are fixed, $x \in W^d_{1, p}(0, T)$, $U \subseteq L_q^m(0, T)$ is a nonempty closed set, and $1 \le p, q \le + \infty$.
We penalise only the endpoint constraints. To this end, define $X = W_{1, p}^d(0, T) \times L_q^m(0, T)$, $M = \{ (x, u) \in X \mid g_i(x(T)) \le 0, \: i \in I, \: g_k(x(T)) = 0, \: k \in J \}$, and $$
A = \Big\{ (x, u) \in X \Bigm| x(0) = x_0, \: u \in U, \:
\dot{x}(t) = f(x(t), u(t), t) \text{ for a.e. } t \in [0, T] \Big\}. $$ Then problem \eqref{VariableEndPointPenaltyProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$ subject to $(x, u) \in M \cap A$. Define $$
\varphi(x, u) = \sum_{i \in I} \max\{ g_i(x(T)), 0 \} + \sum_{k \in J} |g_k(x(T))|. $$ Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem \begin{equation*} \begin{split}
{}&\min \: \Phi_{\lambda}(x, u)
= \int_0^T \theta(x(t), u(t), t) \, dt + \zeta(x(T))
+ \lambda \Big( \sum_{i \in I} \max\{ g_i(x(T)), 0 \} + \sum_{k \in J} |g_k(x(T))| \Big) \\
{}&\text{subject to } \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad u \in U, \end{split} \end{equation*} which is a nonlinear free-endpoint optimal control problem.
For any $x \in \mathbb{R}^d$ denote $I(x) = \{ i \in I \mid g_i(x) = 0 \}$. Let the functions $g_i$, $i \in I \cup J$, be differentiable. Recall that one says that the Mangasarian-Fromovitz constraint qualification (MFCQ) holds at a point $x_T \in \mathbb{R}^d$, if the gradients $\nabla g_k(x_T)$, $k \in J$, are linearly independent, and there exists $h \in \mathbb{R}^d$ such that $\langle \nabla g_k(x_T), h \rangle = 0$ for any $k \in J$, and $\langle \nabla g_i(x_T), h \rangle < 0$ for any $i \in I(x_T)$. Let us show that the complete controllability of the linearised system along with MFCQ guarantees the local exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndPointPenaltyProblem}.
\begin{theorem} Let $U = L_q^m(0, T)$, $q \ge p$, and $(x^*, u^*)$ be a locally optimal solution of problem \eqref{VariableEndPointPenaltyProblem}. Suppose also that assumptions~\ref{Assumpt_Smoothness_LocalEx_TerminConstr}--\ref{Assumpt_LinearisedCompleteControllability} of Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} are satisfied, $\zeta$ is locally Lipschitz continuous, the functions $g_i$, $i \in I \cup J$, are continuously differentiable in a neighbourhood of $x^*(T)$, and MFCQ holds true at the point $x^*(T)$. Then the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndPointPenaltyProblem} is locally exact at $(x^*, u^*)$. \end{theorem}
\begin{proof} For any $(x, u) \in X$ define $$
F(x, u) = \begin{pmatrix} \dot{x}(\cdot) - f(x(\cdot), u(\cdot), \cdot) \\ g(x(T)) \end{pmatrix},
\quad K = \begin{pmatrix} 0 \\ \mathbb{R}_{-}^{l_1} \times \{ \mathbf{0}_{l_2 - l_1} \} \end{pmatrix}, $$ where $g(\cdot) = (g_1(\cdot), \ldots, g_{l_2}(\cdot))^T$, $\mathbb{R}_{-} = (- \infty, 0]$ and $\mathbf{0}_{l_2 - l_1}$ is the zero vector of dimension $l_2 - l_1$. Let us apply Theorem~\ref{Theorem_LocalErrorBound} with $C = \{ (x, u) \in X \mid x(0) = x_0 \}$ to the operator $F$. Then arguing in the same way as in the proof of Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} one arrives at the required result.
With the use of Theorem~\ref{Theorem_DiffNemytskiiOperator} (see Appendix~B) one obtains that the nonlinear operator $F$ maps $X$ to $L^d_p(0, T) \times \mathbb{R}^{l_2}$, is strictly differentiable at $(x^*, u^*)$, and its Fr\'{e}chet derivative at this point has the form $$
DF(x^*, u^*)[h, v] =
\begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ \nabla g(x^*(T)) h(T) \end{pmatrix}, \quad
A(t) = \nabla_x f(x^*(t), u^*(t), t), \quad B(t) = \nabla_u f(x^*(t), u^*(t), t). $$ Hence bearing in mind the fact that $C - (x^*, u^*) = \{ (h, v) \in X \mid h(0) = 0 \}$, since $x^*(0) = x_0$, one gets that the regularity condition \eqref{MetricRegCond} from Theorem~\ref{Theorem_LocalErrorBound} takes the form $0 \in \core K(x^*, u^*)$ with \begin{equation} \label{VariableEndPoint_RegularityCone}
K(x^*, u^*) = \left\{ \begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\
\nabla g(x^*(T)) h(T) \end{pmatrix} -
\begin{pmatrix} 0 \\ K_0 \end{pmatrix} \Biggm| (h, v) \in X, \: h(0) = 0 \right\}, \end{equation} where $K_0 = \big( \mathbb{R}_{-}^{l_1} - (g_1(x^*(T)), \ldots, g_{l_1}(x^*(T)))^T \big) \times \{ \mathbf{0}_{l_2 - l_1} \}$. Let us check that this condition is satisfied.
Indeed, denote $g_J(\cdot) = (g_{l_1 + 1}(\cdot), \ldots, g_{l_2}(\cdot))^T$. By MFCQ the matrix $\nabla g_J(x^*(T))$ has full row rank. Therefore, by the open mapping theorem (see~\cite[formula~$(0.2)$]{Ioffe}) there exists $\eta > 0$ such that for any $y_2 \in \mathbb{R}^{l_2 - l_1}$ one can find $h_2 \in \mathbb{R}^d$ with $|h_2| \le \eta |y_2|$ satisfying the equality $\nabla g_J(x^*(T)) h_2 = y_2$.
Fix any $\omega \in L^d_p(0, T)$, $r_2 > 0$, and $y_2 \in B(\mathbf{0}_{l_2 - l_1}, r_2)$. Then there exists
$h_2 \in \mathbb{R}^d$ with $|h_2| \le \eta r_2$ such that $\nabla g_J(x^*(T)) h_2 = y_2$. By MFCQ there exists $h_1 \in \mathbb{R}^d$ such that $\nabla g_J(x^*(T)) h_1 = 0$ and $\langle \nabla g_i(x^*(T)), h_1 \rangle < 0$ for any $i \in I(x^*(T))$. Taking into account the fact that $g_i(x^*(T)) < 0$ for any $i \notin I(x^*(T))$ one obtains that there exists $\alpha > 0$ such that $\langle \nabla g_i(x^*(T)), \alpha h_1 \rangle + g_i(x^*(T)) < 0$ for all $i \in I$. Furthermore, decreasing $r_2$, if necessary, one can suppose that \begin{equation} \label{VariableEndpoint_DecayDirection}
\max_{|h| \le \eta r_2} \langle \nabla g_i(x^*(T)), \alpha h_1 + h \rangle + g_i(x^*(T)) < 0 \quad \forall i \in I. \end{equation} Observe also that $\nabla g_J(x^*(T))(\alpha h_1 + h_2) = y_2$.
Denote $h_T = \alpha h_1 + h_2$. Arguing in the same way as in the proof of Theorem~\ref{Theorem_LocalExactness_TerminalConstraint} and utilising the complete controllability assumption one can verify that there exists $(h, v) \in X$ such that $$
\dot{h}(t) = A(t) h(t) + B(t) v(t) + \omega(t) \quad \text{for a.e. } t \in (0, T), \quad
h(0) = 0, \quad h(T) = h_T. $$ Consequently, one has $L^d_p(0, T) \times (r_1, + \infty)^{l_1} \times B(\mathbf{0}_{l_2 - l_1}, r_2) \subset K(x^*, u^*)$, where $$
r_1 = \max_{i \in I} \Big(
\max_{|h| \le \eta r_2} \langle \nabla g_i(x^*(T)), \alpha h_1 + h \rangle + g_i(x^*(T)) \Big) < 0 $$ (see~\eqref{VariableEndPoint_RegularityCone} and \eqref{VariableEndpoint_DecayDirection}). Thus, $0 \in \core K(x^*, u^*)$, and the proof is complete. \end{proof}
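As a toy illustration of the constraint qualification, suppose that $J = \emptyset$, $I = \{ 1 \}$, and $g_1(\xi) = |\xi|^2 - 1$. If $g_1(x^*(T)) = 0$, then MFCQ holds at $x^*(T)$ with $h = - x^*(T)$, since
$$
\langle \nabla g_1(x^*(T)), h \rangle = - 2 |x^*(T)|^2 = - 2 < 0.
$$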
Let us now show how one can extend Theorem~\ref{Theorem_FixedEndPointProblem_Linear} to the case of variable-endpoint problems. Theorem~\ref{Theorem_Exactness_EvolutionEquations} can be extended to the case of variable-endpoint problems for linear evolution equations in a similar way. Let, as in the proof of the previous theorem, $g_J(\cdot) = (g_{l_1 + 1}(\cdot), \ldots, g_{l_2}(\cdot))^T$, and denote by $\mathcal{R}_I(x_0, T) = \{ \xi \in \mathcal{R}(x_0, T) \mid g_i(\xi) \le 0, \: i \in I \}$ the set of all those reachable points that satisfy the terminal inequality constraints.
\begin{theorem} Let $q \ge p$, the functions $g_i$, $i \in I$ and the set $U$ be convex, the functions $g_k$, $k \in J$ be affine, and the following assumptions be valid: \begin{enumerate} \item{$f(x, u, t) = A(t) x + B(t) u$ for some $A(\cdot) \in L_{\infty}^{d \times d}(0, T)$ and $B(\cdot) \in L_{\infty}^{d \times m}(0, T)$; }
\item{the function $\zeta$ is locally Lipschitz continuous, the function $\theta = \theta(x, u, t)$ is continuous, differentiable in $x$ and $u$, and the functions $\nabla_x \theta$ and $\nabla_u \theta$ are continuous; }
\item{either $q = + \infty$ or the functions $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, while the function $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$; }
\item{there exists a globally optimal solution of problem \eqref{VariableEndPointPenaltyProblem}, and the following Slater condition holds true: $0 \in \relint g_J(\mathcal{R}_I(x_0, T))$ and there exists a feasible point $(\widehat{x}, \widehat{u}) \in \Omega$ such that $g_i(\widehat{x}(T)) < 0$ for all $i \in I$; }
\item{there exist $\lambda_0 > 0$, $c > \mathcal{I}^*$ and $\delta > 0$ such that the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded in $W^d_{1, p}(0, T) \times L_q^m(0, T)$, and the function $\Phi_{\lambda_0}(x, u)$ is bounded below on $A$. } \end{enumerate} Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndPointPenaltyProblem} is completely exact on $S_{\lambda}(c)$. \end{theorem}
\begin{proof} Arguing in almost the same way as in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} and utilising Theorem~\ref{Theorem_CompleteExactness} one obtains that it is sufficient to check that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for all $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Fix any such $(x, u)$, and define $I_+(x, u) = \{ i \in I \mid g_i(x(T)) > 0 \}$. Let us consider two cases.
\textbf{Case~I.} Suppose that $I_+(x, u) \ne \emptyset$. Define $(\Delta x, \Delta u) = (\widehat{x} - x, \widehat{u} - u)$, where $(\widehat{x}, \widehat{u})$ is from Slater's condition. Observe that $(x + \alpha \Delta x, u + \alpha \Delta u) \in A$ for any $\alpha \in [0, 1]$ due to the convexity of the set $U$ and the linearity of the system. Furthermore, by virtue of the convexity of the functions $g_i$, $i \in I$, for any $\alpha \in [0, 1]$ one has \begin{equation} \label{ConvexityTerminalConstr}
g_i(x(T) + \alpha \Delta x(T)) \le \alpha g_i(\widehat{x}(T)) + (1 - \alpha) g_i(x(T))
\le \alpha \eta + (1 - \alpha) g_i(x(T)), \quad
\eta = \max_{i \in I} g_i(\widehat{x}(T)) < 0. \end{equation} Consequently, for any $i \notin I_+(x, u)$ one has $g_i(x(T) + \alpha \Delta x(T)) < 0$ for all $\alpha \in [0, 1]$ by \eqref{ConvexityTerminalConstr}, while for any $i \in I_+(x, u)$ one has $g_i(x(T) + \alpha \Delta x(T)) \ge 0$ for any sufficiently small $\alpha$ due to the fact that a convex function defined on a finite dimensional space is continuous in the interior of its effective domain (see, e.g. \cite[Theorem~3.5.3]{IoffeTihomirov}). Moreover, $g_J(x(T) + \alpha \Delta x(T)) = (1 - \alpha) g_J(x(T))$, since the functions $g_k$, $k \in J$, are affine and $g_J(\widehat{x}(T)) = 0$ (recall that $(\widehat{x}, \widehat{u}) \in \Omega$). Hence with the use of \eqref{ConvexityTerminalConstr} one obtains that \begin{multline*}
\varphi^{\downarrow}_A(x, u) \le \liminf_{\alpha \to +0}
\frac{\varphi(x + \alpha \Delta x, u + \alpha \Delta u) - \varphi(x, u)}{\alpha \| (\Delta x, \Delta u) \|_X} \\
= \frac{1}{\| (\Delta u, \Delta x) \|_X} \liminf_{\alpha \to + 0}
\frac{1}{\alpha} \bigg( \sum_{i \in I_+(x, u)} \big( g_i(x(T) + \alpha \Delta x(T)) - g_i(x(T)) \big)
+ \sum_{k \in J} \big( (1 - \alpha)|g_k(x(T))| - |g_k(x(T))| \big) \bigg) \\
\le \frac{1}{\| (\Delta u, \Delta x) \|_X} \Big( \sum_{i \in I_+(x, u)} \big( \eta - g_i(x(T)) \big)
- \sum_{k \in J} |g_k(x(T))| \Big)
\le \frac{\eta}{\| (\Delta u, \Delta x) \|_X}. \end{multline*} From the fact that the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded it follows that there exists $C > 0$
(independent of $(x, u)$) such that $\| (\Delta u, \Delta x) \|_X = \| (\widehat{x} - x, \widehat{u} - u) \|_X \le C$. Thus, $\varphi^{\downarrow}_A(x, u) \le \eta / C < 0$, and the proof of the first case is complete.
\textbf{Case~II.} Let now $I_+(x, u) = \emptyset$. Note that $g_J(x(T)) \ne 0$, since $(x, u) \notin \Omega$. Choose any $(\widetilde{x}, \widetilde{u}) \in \Omega$ and define $(\Delta x, \Delta u) = (\widetilde{x} - x, \widetilde{u} - u)$. Then, as in the first case, for any $\alpha \in [0, 1]$ one has $(x + \alpha \Delta x, u + \alpha \Delta u) \in A$, and $$
g_i(x(T) + \alpha \Delta x(T)) \le \alpha g_i(\widetilde{x}(T)) + (1 - \alpha) g_i(x(T)) \le 0
\quad \forall i \in I $$ due to the convexity of the functions $g_i$ and the fact that $I_+(x, u) = \emptyset$. In addition, $g_J(x(T) + \alpha \Delta x(T)) = (1 - \alpha) g_J(x(T))$ for all $\alpha \in [0, 1]$, since the functions $g_k$, $k \in J$, are affine and $(\widetilde{x}, \widetilde{u}) \in \Omega$. Therefore \begin{align*}
\varphi^{\downarrow}_A(x, u) \le \liminf_{\alpha \to +0}
\frac{\varphi(x + \alpha \Delta x, u + \alpha \Delta u) - \varphi(x, u)}{\alpha \| (\Delta x, \Delta u) \|_X}
&= \frac{1}{\| (\Delta u, \Delta x) \|_X} \liminf_{\alpha \to + 0} \frac{1}{\alpha}
\sum_{k \in J} \Big( (1 - \alpha)|g_k(x(T))| - |g_k(x(T))| \Big) \\
&= - \frac{1}{\| (\Delta u, \Delta x) \|_X} \sum_{k \in J} |g_k(x(T))| \le
- \frac{|g_J(x(T))|}{\| (\Delta u, \Delta x) \|_X} < 0. \end{align*} Thus, it remains to show that there exists $C > 0$ such that for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ such that $I_+(x, u) = \emptyset$ one can find $(\widetilde{x}, \widetilde{u}) \in \Omega$ satisfying the inequality \begin{equation} \label{ErrorBoundAffineTerminalConstrWeak}
\| (\Delta x, \Delta u) \|_X = \| x - \widetilde{x} \|_{1, p} + \| u - \widetilde{u} \|_q
\le C |g_J(x(T))|. \end{equation} Then $\varphi^{\downarrow}_A(x, u) \le - 1 / C$, and the proof is complete.
As was shown in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}, there exists $L > 0$ (depending only on
$A(\cdot)$, $B(\cdot)$, $T$, $p$, and $q$) such that $\| x_1 - x_2 \|_{1, p} \le L \| u_1 - u_2 \|_q$ for any $(x_1, u_1), (x_2, u_2) \in A$. Therefore, instead of \eqref{ErrorBoundAffineTerminalConstrWeak} it is sufficient to prove the validity of the inequality \begin{equation} \label{ErrorBoundAffineTerminalConstr}
\| u - \widetilde{u} \|_q \le C |g_J(x(T))|. \end{equation}
Moreover, from \eqref{SobolevImbedding} and the inequality $\| x_1 - x_2 \|_{1, p} \le L \| u_1 - u_2 \|_q$ it follows that the map $u \mapsto x_u(T)$ is Lipschitz continuous, where $x_u$ is a solution of the system $\dot{x}_u(\cdot) = A(\cdot) x_u(\cdot) + B(\cdot) u(\cdot)$ such that $x_u(0) = x_0$. Hence with the use of the fact that the functions $g_i$ are convex and continuous one obtains that the set $U_I = \{ u \in U \mid g_i(x_u(T)) \le 0, \: i \in I \}$ is closed and convex.
Let us prove inequality \eqref{ErrorBoundAffineTerminalConstr} with the use of Robinson's theorem. Note that the function $g_J(\cdot) - g_J(0)$ is linear, since the functions $g_k$, $k \in J$, are affine. Define the linear operator $\mathcal{T} \colon L^m_q(0, T) \to \mathbb{R}^{l_2 - l_1}$, $\mathcal{T} v = g_J(h(T)) - g_J(0)$, where $h$ is a solution of the differential equation $$
\dot{h}(t) = A(t) h(t) + B(t) v(t), \quad h(0) = 0, \quad t \in [0, T]. $$ As was shown in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear}, the mapping $v \mapsto h(T)$ is continuous, which implies that the linear operator $\mathcal{T}$ is bounded.
Fix any $(x_*, u_*) \in \Omega$. By definition for all $(x, u) \in A$ one has $\dot{x}(t) - \dot{x}_*(t) = A(t) (x(t) - x_*(t)) + B(t) (u(t) - u_*(t))$ for a.e. $t \in [0, T]$, $x(0) - x_*(0) = 0$, $g_J(x_*(T)) = 0$, and $$
\mathcal{T}(u - u_*) = g_J(x(T) - x_*(T)) - g_J(0)
= g_J(x(T)) - g_J(0) - (g_J(x_*(T)) - g_J(0)) = g_J(x(T)). $$ Therefore $g_J(\mathcal{R}_I(x_0, T)) = \mathcal{T}(U_I - u_*)$. Define $X_0 = \cl\linhull(U_I - u_*)$ and $Y_0 = \linhull \mathcal{T}(U_I - u_*)$. Note that $Y_0$ is a closed subspace of $\mathbb{R}^{l_2 - l_1}$ and $\mathcal{T}(X_0) = Y_0$. Finally, introduce the operator $\mathcal{T}_0 \colon X_0 \to Y_0$, $\mathcal{T}_0(v) = \mathcal{T}(v)$ for all $v \in X_0$. Clearly, $\mathcal{T}_0$ is a bounded linear operator between Banach spaces. Moreover, by Slater's condition $0 \in \relint g_J(\mathcal{R}_I(x_0, T))$, which is precisely the interior of $\mathcal{T}_0(U_I - u_*)$ in $Y_0$. Consequently, by Robinson's theorem (Theorem~\ref{Theorem_Robinson_Ursescu} with $C = U_I - u_*$, $x^* = 0$, and $y = y^* = 0$) there exists $\kappa > 0$ such that $$
\dist\big( u - u_*, \mathcal{T}_0^{-1}(0) \cap (U_I - u_*) \big) \le
\kappa \big( 1 + \| u - u_* \|_q \big) \Big| \mathcal{T}_0(u - u_*) \Big|
\quad \forall u \in U_I. $$ Recall that $\mathcal{T}_0(u - u_*) = g_J(x(T))$, since $(x_*, u_*) \in \Omega$. Thus, for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ such that $I_+(x, u) = \emptyset$ one can find $v \in U_I - u_*$ such that $\mathcal{T}_0(v) = 0$ and $$
\big\| u - u_* - v \big\|_q \le 2 \kappa \big( 1 + \| u - u_* \|_q \big) \big| g_J(x(T)) \big|. $$ Define $\widetilde{u} = u_* + v$, and let $\widetilde{x}$ be the corresponding solution of the original system, i.e. $(\widetilde{x}, \widetilde{u}) \in A$. Then $g_i(\widetilde{x}(T)) \le 0$ for all $i \in I$, since $\widetilde{u} \in U_I$, and $g_J(\widetilde{x}(T)) = \mathcal{T} (\widetilde{u} - u_*) = \mathcal{T}(v) = 0$, i.e. $g_J(\widetilde{x}(T)) = 0$ and $(\widetilde{x}, \widetilde{u}) \in \Omega$. Moreover, one has
$\| u - \widetilde{u} \|_q \le 2 \kappa ( 1 + \| u - u_* \|_q ) | g_J(x(T))|$. By our assumption the set $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ is bounded. Therefore there exists
$C > 0$ such that $2 \kappa ( 1 + \| u - u_* \|_q ) \le C$ for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ such that $I_+(x, u) = \emptyset$. Thus, for all such $(x, u)$ there exists $(\widetilde{x}, \widetilde{u}) \in \Omega$ satisfying \eqref{ErrorBoundAffineTerminalConstr}, and the proof is complete. \end{proof}
\begin{remark} Note that in the case when there are no equality constraints Slater's condition takes an especially simple form. Namely, it is sufficient to suppose that there exists a feasible point $(\widehat{x}, \widehat{u}) \in \Omega$ such that $g_i(\widehat{x}(T)) < 0$ for all $i \in I$. \end{remark}
\begin{remark} Let us briefly discuss how one can extend Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} to the case of nonlinear variable-endpoint problems. In the case when there are no equality constraints and the inequality constraints are differentiable, one has to replace the negative tangent angle property with the assumption that there exist $\delta > 0$ and $\beta > 0$ such that for any $\xi \in \mathcal{R}(x_0, T)$ satisfying the inequalities $0 < \sum_{i \in I} \max\{ g_i(\xi), 0 \} < \delta$ one can find a sequence $\{ \xi_n \} \subset \mathcal{R}(x_0, T)$ converging to $\xi$ such that $$
\left\langle \nabla g_i(\xi), \frac{\xi_n - \xi}{|\xi_n - \xi|} \right\rangle \le - \beta
\quad \forall n \in \mathbb{N} \quad \forall i \in I \colon g_i(\xi) \ge 0. $$ Then arguing in essentially the same way as in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} one can show that the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndPointPenaltyProblem} is completely exact on $S_{\lambda}(c)$ for any sufficiently large $\lambda$. In the general case, a similar but more cumbersome assumption must be imposed on both equality and inequality constraints. \end{remark}
\section{Exact Penalisation of Pointwise State Constraints} \label{Sect_ExactPen_StateConstraint}
Let us now turn to the analysis of the exactness of penalty functions for optimal control problems with pointwise state constraints. In this case the situation is even more complicated than in the case of problems with terminal constraints. It seems that verifiable sufficient conditions for the complete exactness of a penalty function for problems with state constraints can be obtained either under very stringent assumptions on the controllability of the system or in the case of linear systems and convex state constraints. Furthermore, a penalty term for state constraints can be designed with the use of the $L^p$-norm with any $1 \le p \le + \infty$. The smooth norms with $1 < p < + \infty$ and the $L^1$-norm are more appealing for practical applications, while, often, one can guarantee exact penalisation of state constraints only in the case $p = + \infty$.
\subsection{A Counterexample}
We start our analysis of state constrained problems with a simple counterexample that illuminates the difficulties of designing exact penalty functions for state constraints. It also demonstrates that in the case when the functional $\mathcal{I}(x, u)$ explicitly depends on control it is apparently impossible to define an exact penalty function for problems with state \textit{equality} constraints within the framework adopted in our study.
\begin{example} \label{CounterExample_StateEqConstr} Let $d = 2$, $m = 1$, and $p = q = 2$. Define $U = \{ u \in L^2(0, T) \mid u(t) \in [-1, 1] \text{ for a.e. } t \in (0, T) \}$, and consider the following fixed-endpoint optimal control problem with state equality constraint: \begin{align*}
&\min \: \mathcal{I}(u) = - \int_0^T u(t)^2 dt \\
&\text{s.t.} \quad \begin{cases} \dot{x}^1 = 1 \\ \dot{x}^2 = u \end{cases} \quad t \in [0, T], \quad
x(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad
x(T) = \begin{pmatrix} T \\ 0 \end{pmatrix}, \quad
u \in U, \quad g(x(t)) \equiv 0, \end{align*} where $g(x^1, x^2) = x^2$. The only feasible point of this problem is $(x^*, u^*)$ with $x^*(t) \equiv (t, 0)^T$ and $u^*(t) = 0$ for a.e. $t \in [0, T]$. Thus, $(x^*, u^*)$ is a globally optimal solution of this problem.
We would like to penalise the state equality constraint $g(x(t)) = x^2(t) = 0$. One can define the penalty term in one of the following ways: $$
\varphi(x) = \Big( \int_0^T |g(x(t))|^r \, dt \Big)^{1/r}, \quad 1 \le r < + \infty, \quad
\varphi(x) = \max_{t \in [0, T]} |g(x(t))|, \quad
\varphi(x) = \int_0^T |g(x(t))|^{\alpha} \, dt, \quad 0 < \alpha < 1. $$ Clearly, all these functions are continuous with respect to the uniform metric, which by inequality \eqref{SobolevImbedding} implies that they are continuous on $W^d_{1,p}(0, T)$. Therefore, instead of choosing a particular function $\varphi$, we simply suppose that $\varphi \colon W^d_{1,p}(0, T) \to [0, + \infty)$ is an arbitrary function, continuous with respect to the uniform metric, and such that $\varphi(x) = 0$ if and only if $g(x(t)) \equiv 0$. One can consider the penalised problem \begin{equation} \label{Ex_PenalizedStatEqConstr} \begin{split}
&\min \: \Phi_{\lambda}(x, u) = - \int_0^T u(t)^2 dt + \lambda \varphi(x) \\
&\text{s.t.} \quad \begin{cases} \dot{x}^1 = 1 \\ \dot{x}^2 = u \end{cases} \quad t \in [0, T], \quad
x(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad
x(T) = \begin{pmatrix} T \\ 0 \end{pmatrix}, \quad
u \in U. \end{split} \end{equation} Observe that the goal function $\mathcal{I}$ is Lipschitz continuous on any bounded subset of $L^2(0, T)$ by \cite[Proposition~4]{DolgopolikFominyh}, and the set $$
A = \big\{ (x, u) \in X \mid x(0) = (0, 0)^T, \: x(T) = (T, 0)^T, \: u \in U, \:
\dot{x}^1 = 1, \: \dot{x}^2 = u \text{ for a.e. } t \in [0, T] \big\} $$ is obviously closed in $X = W^2_{1, 2}(0, T) \times L^2(0, T)$. Consequently, penalised problem \eqref{Ex_PenalizedStatEqConstr} fits the framework of Section~\ref{Sect_ExactPenaltyFunctions}. However, the penalty function $\Phi_{\lambda}$ is not exact regardless of the choice of the penalty term $\varphi$.
Indeed, arguing by reductio ad absurdum, suppose that $\Phi_{\lambda}$ is globally exact. Then there exists $\lambda \ge 0$ such that $\Phi_{\lambda}(x, u) \ge \Phi_{\lambda}(x^*, u^*)$ for all $(x, u) \in A$. For any $n \in \mathbb{N}$ define $$
u_n(t) = \begin{cases}
1, & \text{if } t \in \left[\frac{T(2k - 2)}{2n}, \frac{T(2k - 1)}{2n} \right), \: k \in \{ 1, 2, \ldots, n \}, \\
-1, & \text{if } t \in \left[ \frac{T(2k - 1)}{2n}, \frac{Tk}{n} \right), \: k \in \{ 1, 2, \ldots, n \},
\end{cases} $$ i.e. $u_n$ takes alternating values $\pm 1$ on the segments of length $T / 2n$. For the corresponding trajectory $x_n$ one has $x_n(0) = (0, 0)^T$, $x_n(T) = (T, 0)^T$ (i.e. $(x_n, u_n) \in A$), and
$\| x_n^2 \|_{\infty} = T/2n$. Therefore, $\varphi(x_n) \to 0$ as $n \to \infty$ due to the continuity of the function $\varphi$ with respect to the uniform metric. On the other hand, $\mathcal{I}(u_n) = -T$ for all $n \in \mathbb{N}$, which implies that $\Phi_{\lambda}(x_n, u_n) \to - T$ as $n \to \infty$. Consequently, $\Phi_{\lambda}(x_n, u_n) < 0 = \Phi_{\lambda}(x^*, u^*)$ for any sufficiently large $n \in \mathbb{N}$, which contradicts our assumption. Thus, the penalty function $\Phi_{\lambda}$ is not globally exact for any penalty term $\varphi$ that is continuous with respect to the uniform metric. \end{example}
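The effect described in this example is easy to observe numerically. The following minimal sketch (with the illustrative choice $T = 1$; the grid size and the rectangle-rule quadrature are our own assumptions, not part of the example) evaluates $\mathcal{I}(u_n)$ and $\| x_n^2 \|_{\infty}$ along the sequence constructed above:
\begin{verbatim}
import numpy as np

# Bang-bang controls u_n from the example: alternating +-1 on
# consecutive segments of length T/(2n); x^2 is the integral of u_n.
T = 1.0
for n in (1, 10, 100, 1000):
    t = np.linspace(0.0, T, 200 * n, endpoint=False)
    dt = T / t.size
    u = np.where(np.floor(2 * n * t / T).astype(int) % 2 == 0, 1.0, -1.0)
    x2 = np.cumsum(u) * dt            # x^2(t) = int_0^t u_n(s) ds
    cost = -np.sum(u ** 2) * dt       # I(u_n) = -T for every n
    print(n, cost, np.abs(x2).max())  # sup |x^2| = T/(2n) -> 0
\end{verbatim}
The printout shows the cost frozen at $-T$ while the constraint violation vanishes, so no finite penalty parameter $\lambda$ can close the gap.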
The previous example might lead one to think that linear penalty functions for state constrained optimal control problems cannot be exact. Our aim is to show that in some cases exact penalisation of state constraints (especially, state inequality constraints) is nevertheless possible, but one must utilise the highly nonsmooth $L^{\infty}$-norm to achieve exactness. Furthermore, we demonstrate that exact $L^p$-penalisation with finite $p$ is possible in the case when either the problem is convex and Lagrange multipliers corresponding to state constraints belong to $L^{p'}(0, T)$ or the functional $\mathcal{I}(x, u)$ does not depend on the control inputs explicitly.
\subsection{Linear Evolution Equations}
We start with the convex case, i.e. with the case when the controlled system is linear and the state inequality constraints are convex. The convexity of the constraints, along with the widely known Slater condition from convex optimisation, allows one to prove the complete exactness of the $L^{\infty}$ penalty function under relatively mild assumptions. The main results on exact penalty functions in this case can be obtained for both linear time-varying systems and linear evolution equations in Hilbert spaces. For the sake of brevity, we consider only evolution equations.
Let, as in Section~\ref{SubSec_EvolEq_TerminalConstr}, $\mathscr{H}$ and $\mathscr{U}$ be complex Hilbert spaces, $\mathbb{T}$ be a strongly continuous semigroup on $\mathscr{H}$ with generator $\mathcal{A} \colon \mathcal{D}(\mathcal{A}) \to \mathscr{H}$, and let $\mathcal{B}$ be an admissible control operator for $\mathbb{T}$. For any $t \ge 0$ denote by $F_t u = \int_0^t \mathbb{T}_{t - \sigma} \mathcal{B} u(\sigma) \, d \sigma$ the input map corresponding to $(\mathcal{A}, \mathcal{B})$. Then, as was pointed out in Section~\ref{SubSec_EvolEq_TerminalConstr}, for any $u \in L^2((0, T); \mathscr{U})$ the initial value problem $\dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t)$, $x(0) = x_0$ with $x_0 \in \mathscr{H}$ has a unique solution $x \in C([0, T]; \mathscr{H})$ given by \begin{equation} \label{SolutionViaSemiGroup_SC}
x(t) = \mathbb{T}_t x_0 + F_t u \quad \forall t \in [0, T]. \end{equation} Consider the following fixed-endpoint optimal control problem with state constraints: \begin{equation} \label{EvolEqStateConstrainedProblem} \begin{split}
{}&\min_{(x, u)} \, \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \quad
\text{subject to} \quad \dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t), \quad t \in [0, T], \\
{}&x(0) = x_0, \quad x(T) = x_T, \quad u \in U, \quad
g_j(x(t), t) \le 0 \quad \forall t \in [0, T], \quad j \in J. \end{split} \end{equation} Here $\theta \colon \mathscr{H} \times \mathscr{U} \times [0, T] \to \mathbb{R}$ and $g_j \colon \mathscr{H} \times [0, T] \to \mathbb{R}$, $j \in J = \{ 1, \ldots, l \}$, are given functions, $T > 0$ and $x_0, x_T \in \mathscr{H}$ are fixed, and $U \subseteq L^2((0, T); \mathscr{U})$ is a closed convex set.
Let us introduce a penalty function for problem \eqref{EvolEqStateConstrainedProblem}. Our aim is to penalise the state inequality constraints $g_j(x(t), t) \le 0$. To this end, define $X = C([0, T]; \mathscr{H}) \times L^2((0, T); \mathscr{U})$, $M = \{ (x, u) \in X \mid g_j(x(t), t) \le 0 \quad \forall t \in [0, T], \, j \in J \}$, and $$
A = \Big\{ (x, u) \in X \Bigm| x(0) = x_0, \: x(T) = x_T, \: u \in U, \:
\text{and $\eqref{SolutionViaSemiGroup_SC}$ holds true} \Big\}. $$ Then problem \eqref{EvolEqStateConstrainedProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$ subject to $(x, u) \in M \cap A$. Introduce the penalty term $\varphi(x, u) = \sup_{t \in [0, T]} \{ g_1(x(t), t), \ldots, g_l(x(t), t), 0 \}$. Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem of minimising $\Phi_{\lambda}$ over the set $A$, which is a fixed-endpoint problem without state constraints of the form: \begin{equation*} \begin{split}
{}&\min_{(x, u)} \, \int_0^T \theta(x(t), u(t), t) \, dt
+ \lambda \sup_{t \in [0, T]} \big\{ g_1(x(t), t), \ldots, g_l(x(t), t), 0 \big\} \\
{}&\text{subject to } \dot{x}(t) = \mathcal{A} x(t) + \mathcal{B} u(t), \quad t \in [0, T],
\quad u \in U, \quad x(0) = x_0, \quad x(T) = x_T. \end{split} \end{equation*} Note that after discretisation in $t$ this problem becomes a standard minimax problem with convex constraints, which can be solved via a wide variety of existing numerical methods of minimax optimisation or nonsmooth convex optimisation in the case when the function $(x, u) \mapsto \theta(x, u, t)$ is convex. Our aim is to show that this fixed-endpoint problem is equivalent to problem \eqref{EvolEqStateConstrainedProblem}, provided Slater's condition holds true, i.e. provided there exists a control input $\widehat{u} \in U$ such that for the corresponding solution $\widehat{x}$ (see~\eqref{SolutionViaSemiGroup_SC}) one has $\widehat{x}(T) = x_T$ and $g_j(\widehat{x}(t), t) < 0$ for all $t \in [0, T]$ and $j \in J$.
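Before turning to the theory, let us illustrate the remark on discretisation. The following minimal sketch (all data are our own illustrative assumptions: a double integrator with $\mathscr{H} = \mathbb{R}^2$, $T = 1$, a bound $b = 1.2$ on the velocity, the explicit Euler scheme, and the SLSQP solver from \texttt{scipy}) rewrites the $L^{\infty}$ penalty term in epigraph form, so that the discretised penalised problem becomes a standard nonlinear programming problem:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Double integrator x1' = x2, x2' = u with x(0) = (0,0), x(T) = (1,0),
# state constraint g(x(t)) = x2(t) - b <= 0 and cost int_0^T u(t)^2 dt.
T, N, lam, b = 1.0, 40, 50.0, 1.2
h = T / N

def trajectory(u):
    x = np.zeros((N + 1, 2))
    for k in range(N):
        x[k + 1, 0] = x[k, 0] + h * x[k, 1]
        x[k + 1, 1] = x[k, 1] + h * u[k]
    return x

# Epigraph form of I + lam * max{g, 0}: minimise I + lam * s subject to
# s >= g(x_k) for all k and s >= 0, plus the terminal constraint.
def objective(z):
    u, s = z[:N], z[N]
    return h * np.sum(u ** 2) + lam * s

def ineq(z):  # SLSQP convention: every component must be >= 0
    u, s = z[:N], z[N]
    return np.concatenate([s - (trajectory(u)[:, 1] - b), [s]])

def eq(z):    # terminal constraint x(T) = (1, 0)
    return trajectory(z[:N])[-1] - np.array([1.0, 0.0])

res = minimize(objective, np.zeros(N + 1), method="SLSQP",
               constraints=[{"type": "ineq", "fun": ineq},
                            {"type": "eq", "fun": eq}])
x = trajectory(res.x[:N])
print(res.fun, (x[:, 1] - b).max())  # cost and maximal violation
\end{verbatim}
For $\lambda$ large enough the computed trajectory satisfies the discretised state constraint, consistently with the results below.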
\begin{theorem} \label{Theorem_LinEvolEq_StateConstr} Let the following assumptions be valid: \begin{enumerate} \item{$\theta$ is continuous, and for any $R > 0$ there exist $C_R > 0$ and an a.e. nonnegative function
$\omega_R \in L^1(0, T)$ such that $| \theta(x, u, t) | \le C_R \| u \|_{\mathscr{U}}^2 + \omega_R(t)$ for all
$x \in \mathscr{H}$, $u \in \mathscr{U}$, and $t \in (0, T)$ such that $\| x \|_{\mathscr{H}} \le R$; \label{Assumpt_LinEvolEq_SC_ThetaGrowth}}
\item{either the set $U$ is bounded in $L^2((0, T); \mathscr{U})$ or there exist $C_1 > 0$ and $\omega \in L^1(0, T)$
such that $\theta(x, u, t) \ge C_1 \| u \|_{\mathscr{U}}^2 + \omega(t)$ for all $x \in \mathscr{H}$, $u \in \mathscr{U}$, and for a.e. $t \in [0, T]$; }
\item{$\theta$ is differentiable in $x$ and $u$, the functions $\nabla_x \theta$ and $\nabla_u \theta$ are continuous, and for any $R > 0$ there exist $C_R > 0$ and a.e. nonnegative functions $\omega_R \in L^1(0, T)$ and $\eta_R \in L^2(0, T)$ such that $$
\| \nabla_x \theta(x, u, t) \|_{\mathscr{H}} \le C_R \| u \|_{\mathscr{U}}^2 + \omega_R(t), \quad
\| \nabla_u \theta(x, u, t) \|_{\mathscr{U}} \le C_R \| u \|_{\mathscr{U}} + \eta_R(t) $$
for all $x \in \mathscr{H}$, $u \in \mathscr{U}$ and $t \in (0, T)$ such that $\| x \|_{\mathscr{H}} \le R$; }
\item{there exists a globally optimal solution of problem \eqref{EvolEqStateConstrainedProblem}; \label{Assumpt_LinEvolEq_SC_GlobSol}}
\item{the functions $g_j(x, t)$, $j \in J$, are convex in $x$, continuous jointly in $x$ and $t$, and Slater's condition holds true. \label{Assumpt_LinEvolEq_StateConstr}} \end{enumerate} Then for all $c \in \mathbb{R}$ there exists $\lambda^*(c) \ge 0$ such that for any $\lambda \ge \lambda^*(c)$ the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqStateConstrainedProblem} is completely exact on the set $S_{\lambda}(c)$. \end{theorem}
\begin{proof} Almost literally repeating the first part of the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations} one obtains that the assumptions on the function $\theta$ and its derivatives ensure that the functional $\mathcal{I}(x, u)$ is Lipschitz continuous on any bounded subset of $X$, the set $S_{\lambda}(c)$ is bounded in $X$ for all $c \in \mathbb{R}$ and $\lambda \ge 0$, and the penalty function $\Phi_{\lambda}$ is bounded below on $A$. In addition, the set $A$ is closed by virtue of the closedness of the set $U$ and the fact that the input map $F_t$ is a bounded linear operator from $L^2((0, T); \mathscr{U})$ to $\mathscr{H}$ (see~\eqref{SolutionViaSemiGroup_SC}). Finally, the mappings $x \mapsto g_j(x(\cdot), \cdot)$, $j \in J$, and the penalty term $\varphi$ are continuous by Proposition~\ref{Prop_ContNonlinearMap_in_C} and Corollary~\ref{Corollary_StateConstrPenTerm_Contin} (see Appendix~B).
Fix any $\lambda \ge 0$ and $c \in \mathbb{R}$. By applying Theorem~\ref{Theorem_CompleteExactness} one gets that it remains to verify that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for any $(x, u) \in S_{\lambda}(c) \cap (\Omega_{\delta} \setminus \Omega)$ (i.e. $(x, u) \in S_{\lambda}(c)$ and $0 < \varphi(x, u) < \delta$).
Fix any $\delta > 0$ and $(x, u) \in S_{\lambda}(c) \cap (\Omega_{\delta} \setminus \Omega)$, and let a pair $(\widehat{x}, \widehat{u})$ be from Slater's condition. Denote
$\sigma = \| (\widehat{x}, \widehat{u}) - (x, u) \|_X =
\| \widehat{x} - x \|_{C([0, T]; \mathscr{H})} + \| \widehat{u} - u \|_{L^2((0, T); \mathscr{U})}$. Note that there exists $R > 0$ (independent of $(x, u)$) such that $\sigma \le R$, since the set $S_{\lambda}(c)$ is bounded. Furthermore, $\sigma > 0$, since $(\widehat{x}, \widehat{u}) \in \Omega$ by definition.
Define $\Delta x = (\widehat{x} - x) / \sigma$ and $\Delta u = (\widehat{u} - u) / \sigma$. Observe that
$\| (\Delta x, \Delta u) \|_X = 1$, and $(x + \alpha \Delta x, u + \alpha \Delta u) \in A$ for any $\alpha \in [0, \sigma]$ due to the convexity of the set $U$ and the linearity of the system $\dot{x} = \mathcal{A} x + \mathcal{B} u$. With the use of the convexity of the functions $g_j(x, t)$ in $x$ one obtains that $$
g_j(x(t) + \alpha \Delta x(t), t) \le \frac{\alpha}{\sigma} g_j(\widehat{x}(t), t)
+ \left( 1 - \frac{\alpha}{\sigma} \right) g_j(x(t), t)
\le \frac{\alpha \eta}{\sigma} + \left( 1 - \frac{\alpha}{\sigma} \right) \varphi(x, u)
\le \frac{\alpha \eta}{\sigma} + \varphi(x, u) $$ for any $\alpha \in [0, \sigma]$, $t \in [0, T]$, and $j \in J$, where $\eta = \max_{j \in J} \max_{t \in [0, T]} g_j(\widehat{x}(t), t)$ (note that $\eta < 0$ due to Slater's condition). Therefore, one has $$
\max_{t \in [0, T]} \big\{ g_1(x(t) + \alpha \Delta x(t), t), \ldots, g_l(x(t) + \alpha \Delta x(t), t) \big\}
\le \frac{\alpha \eta}{\sigma} + \varphi(x, u)
\quad \forall \alpha \in [0, \sigma]. $$ Recall that $\varphi(x, u) > 0$, since $(x, u) \notin \Omega$. Consequently, bearing in mind the fact that
the right-hand side of the inequality above is positive for any $\alpha < \varphi(x, u) \sigma / |\eta|$ one obtains that $$
\varphi(x + \alpha \Delta x, u + \alpha \Delta u) =
\max_{t \in [0, T], j \in J} \big\{ g_j(x(t) + \alpha \Delta x(t), t), 0 \big\}
\le \frac{\alpha \eta}{\sigma} + \varphi(x, u)
\quad \forall \alpha \in \left[ 0, \min\left\{ \sigma, \frac{\varphi(x, u) \sigma}{|\eta|} \right\} \right). $$ Dividing this inequality by $\alpha$ and passing to the limit superior as $\alpha \to + 0$ one finally gets that $$
\varphi^{\downarrow}_A(x, u)
\le \limsup_{\alpha \to +0} \frac{\varphi(x + \alpha \Delta x, u + \alpha \Delta u) - \varphi(x, u)}{\alpha}
\le \frac{\eta}{\sigma} \le \frac{\eta}{R} < 0, $$ where both $\eta$ and $R$ are independent of $(x, u) \in S_{\lambda}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Thus, $\varphi^{\downarrow}_A(x, u) \le - |\eta| / R$ for any $(x, u) \in S_{\lambda}(c) \cap (\Omega_{\delta} \setminus \Omega)$, and the proof is complete. \end{proof}
\begin{corollary} \label{Corollary_LinEvolEq_StateConstr} Let all assumptions of Theorem~\ref{Theorem_LinEvolEq_StateConstr} be valid. Suppose also that either the set $U$ is bounded in $L^2((0, T); \mathscr{U})$ or the function $(x, u) \mapsto \theta(x, u, t)$ is convex for all $t \in [0, T]$. Then the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqStateConstrainedProblem} is completely exact on $A$. \end{corollary}
\begin{proof} If the set $U$ is bounded, then by the first part of the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations_Global} the set $A$ is bounded in $X$. Therefore, arguing in the same way as in the proof of Theorem~\ref{Theorem_LinEvolEq_StateConstr}, but replacing the set $S_{\lambda}(c)$ with $A$ and utilising Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} instead of Theorem~\ref{Theorem_CompleteExactness}, we arrive at the required result.
If the function $(x, u) \mapsto \theta(x, u, t)$ is convex, then, as was shown in the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations_Global}, the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqStateConstrainedProblem} is completely exact on $A$ if and only if it is globally exact. It remains to note that its global exactness follows from Theorem~\ref{Theorem_LinEvolEq_StateConstr}. \end{proof}
\begin{remark} Theorem~\ref{Theorem_LinEvolEq_StateConstr} and Corollary~\ref{Corollary_LinEvolEq_StateConstr} can be easily extended to the case of problems with inequality constraints of the form $g_j(x, u) \le 0$, where
$g_j \colon C([0, T]; \mathscr{H}) \times L^2((0, T); \mathscr{U}) \to \mathbb{R}$ are continuous convex functions. In particular, one can consider the integral constraint $\| u \|_{L^2((0, T); \mathscr{U})} \le C$ for some $C > 0$. In this case one can define $\varphi(x, u) = \max\{ g_1(x, u), \ldots, g_l(x, u), 0 \}$, while Slater's condition takes the form: there exists a feasible point $(\widehat{x}, \widehat{u})$ such that $g_j(\widehat{x}, \widehat{u}) < 0$ for all $j$. \end{remark}
\begin{remark} It should be noted that Theorem~\ref{Theorem_LinEvolEq_StateConstr} and Corollary~\ref{Corollary_LinEvolEq_StateConstr} can be applied to problems with distributed $L^{\infty}$ state constraints. For instance, suppose that $\mathscr{H} = W^{1, 2}(0, 1)$, and let the constraints have the form $b_1 \le x(t, r) \le b_2$ for all $t \in [0, T]$ and a.e. $r \in (0, 1)$. Then one can define $g_1(x(t)) = \esssup_{r \in (0, 1)} x(t, r) - b_2$ and $g_2(x(t)) = \esssup_{r \in (0, 1)} (- x(t, r)) + b_1$ and consider the state constraints $g_1(x(\cdot)) \le 0$ and $g_2(x(\cdot)) \le 0$. One can easily check that both functions $g_1$ and $g_2$ are convex and continuous. Slater's condition in this case takes the form: there exists $(\widehat{x}, \widehat{u}) \in \Omega$ such that $b_1 + \varepsilon \le \widehat{x}(t, r) \le b_2 - \varepsilon$ for some $\varepsilon > 0$, for all $t \in [0, T]$, and a.e. $r \in (0, 1)$. \end{remark}
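To verify the claims made in the preceding remark (a routine computation, recorded here for convenience): for any $x, y \in W^{1, 2}(0, 1)$ and $\lambda \in [0, 1]$ one has $$
g_1(\lambda x + (1 - \lambda) y) \le \lambda g_1(x) + (1 - \lambda) g_1(y), \qquad
|g_1(x) - g_1(y)| \le \esssup_{r \in (0, 1)} |x(r) - y(r)| \le C \| x - y \|_{W^{1, 2}(0, 1)}, $$ where the first inequality follows from the sublinearity of the essential supremum, and the second one from the continuity of the embedding of $W^{1, 2}(0, 1)$ into $L^{\infty}(0, 1)$; the function $g_2$ is treated in the same way.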
Observe that Slater's condition imposes some restriction on the initial and final states. Namely, Slater's condition implies that $g_j(x_0, 0) < 0$ and $g_j(x_T, T) < 0$ for all $j \in J$ (in the general case only the inequalities $g_j(x_0, 0) \le 0$ and $g_j(x_T, T) \le 0$ hold true). Let us give an example demonstrating that in the case when the strict inequalities are not satisfied, the penalty function $\Phi_{\lambda}$ for problem \eqref{EvolEqStateConstrainedProblem} need not be exact. For the sake of simplicity, we consider a free-endpoint finite dimensional problem. As one can readily verify, Theorem~\ref{Theorem_LinEvolEq_StateConstr} remains valid in the case of free-endpoint problems.
\begin{example} \label{CounterExample_StateInEqConstr} Let $d = 2$, $m = 1$, $p = q = 2$. Define
$U = \{ u \in L^2(0, T) \mid u(t) \ge 0 \text{ for a.e. } t \in (0, T), \: \| u \|_2 \le 1 \}$, and consider the following free-endpoint optimal control problem with the state inequality constraint: $$
\min \: \mathcal{I}(u) = - \int_0^T u(t)^2 dt \quad
\text{s.t.} \quad \begin{cases} \dot{x}^1 = 1 \\ \dot{x}^2 = u \end{cases} \quad t \in [0, T], \quad
x(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad u \in U, \quad g(x(t)) \le 0, $$ where $g(x^1, x^2) = x^2$. The only feasible point of this problem is $(x^*, u^*)$ with $x^*(t) \equiv (t, 0)^T$ and $u^*(t) = 0$ for a.e. $t \in [0, T]$. Thus, $(x^*, u^*)$ is a globally optimal solution of this problem. Note also that the function $\theta(x, u, t) = - u^2$ satisfies the assumptions of Theorem~\ref{Theorem_LinEvolEq_StateConstr}. Furthermore, in this case the set $A$ is obviously bounded in $X$, and the penalty function $\Phi_{\lambda}(x, u) = \mathcal{I}(u) + \lambda \varphi(x)$ with $\varphi(x) = \max_{t \in [0, T]} \{ g(x(t)), 0 \}$ is bounded below on $A$.
Observe that $g(x(0)) = 0$, which implies that Slater's condition does not hold true. Let us check that the penalty function $\Phi_{\lambda}$ is not exact. Arguing by reductio ad absurdum, suppose that $\Phi_{\lambda}$ is globally exact. Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ and $(x, u) \in A$ one has $\Phi_{\lambda}(x, u) \ge \Phi_{\lambda}(x^*, u^*)$. For any $n \in \mathbb{N}$ define $u_n(t) = n$, if
$t \in [0, 1 / n^2]$, and $u_n(t) = 0$, if $t > 1 / n^2$. Then $\| u_n \|_2 = 1$, and $(x_n, u_n) \in A$, where $x_n(t) = (t, \min\{ nt, 1 / n \})^T$ is the corresponding trajectory of the system. Observe that $\mathcal{I}(u_n) = -1$ and $\varphi(x_n) = 1 / n$ for any $n \in \mathbb{N}$. Consequently, $\Phi_{\lambda}(x_n, u_n) < 0 = \Phi_{\lambda}(x^*, u^*)$ for any sufficiently large $n \in \mathbb{N}$, which contradicts our assumption. Thus, the penalty function $\Phi_{\lambda}$ is not globally exact. Moreover, one can easily see that the penalty function $\Phi_{\lambda}(x) = \mathcal{I}(u) + \lambda \varphi(x)$ is not globally exact for any penalty term $\varphi$ that is continuous with respect to the uniform metric. \end{example}
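For the reader's convenience, let us write out the computations behind Example~\ref{CounterExample_StateInEqConstr} (a routine verification under the definitions above): $$
\| u_n \|_2^2 = \int_0^{1/n^2} n^2 \, dt = 1, \qquad
x_n^2(t) = \int_0^t u_n(\tau) \, d\tau = \min\Big\{ nt, \frac{1}{n} \Big\}, $$ whence $\varphi(x_n) = \max_{t \in [0, T]} x_n^2(t) = 1/n$, $\mathcal{I}(u_n) = -1$, and $\Phi_{\lambda}(x_n, u_n) = - 1 + \lambda / n \to - 1 < 0 = \Phi_{\lambda}(x^*, u^*)$ as $n \to \infty$ for any fixed $\lambda \ge 0$.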
In the case when not only the state constraints but also the cost functional $\mathcal{I}$ are convex, one can utilise the convexity of the problem to prove that the exact $L^p$-penalisation of state constraints with any $1 \le p < + \infty$ is possible, provided the Lagrange multipliers corresponding to the state constraints are sufficiently regular. Indeed, let $(x^*, u^*)$ be a globally optimal solution of problem \eqref{EvolEqStateConstrainedProblem}, and let $E(x^*, u^*) \subset \mathbb{R} \times (C[0, T])^l$ be the set of all those vectors $(y_0, y_1, \ldots, y_l)$ for which one can find $(x, u) \in A$ such that $\mathcal{I}(x, u) - \mathcal{I}(x^*, u^*) < y_0$ and $g_j(x(t), t) \le y_j(t)$ for all $t \in [0, T]$ and $j \in J$. The set $E(x^*, u^*)$ has nonempty interior due to the fact that $(0, + \infty) \times (C_+[0, T])^l \subset E(x^*, u^*)$ (put $(x, u) = (x^*, u^*)$), where $C_+[0, T]$ is the cone of nonnegative functions. Observe also that $0 \notin E(x^*, u^*)$, since otherwise one can find a feasible point $(x, u)$ of problem \eqref{EvolEqStateConstrainedProblem} such that $\mathcal{I}(x, u) < \mathcal{I}(x^*, u^*)$, which contradicts the definition of $(x^*, u^*)$. Furthermore, with the use of the convexity of $\mathcal{I}$ and $g_j$ one can easily check that the set $E(x^*, u^*)$ is convex.

Therefore, by applying the separation theorem (see, e.g. \cite[Theorem~V.2.8]{DunfordSchwartz}) one obtains that there exist $\mu_0 \in \mathbb{R}$ and continuous linear functionals $\psi_j$ on $C[0, T]$, $j \in J$, not all zero, such that $\mu_0 y_0 + \sum_{j = 1}^l \psi_j(y_j) \ge 0$ for all $(y_0, y_1, \ldots, y_l) \in E(x^*, u^*)$. Taking into account the fact that $(0, + \infty) \times (C_+[0, T])^l \subset E(x^*, u^*)$ one obtains that $\mu_0 \ge 0$ and $\psi_j(y) \ge 0$ for any $y \in C_+[0, T]$ and $j \in J$. Consequently, utilising the Riesz-Markov-Kakutani representation theorem (see~\cite[Theorem~IV.6.3]{DunfordSchwartz}) and bearing in mind the definition of $E(x^*, u^*)$ one gets that there exist regular Borel measures $\mu_j$ on $[0, T]$, $j \in J$, such that \begin{equation} \label{LagrangeMultiplier_StateConstr}
\mu_0 \mathcal{I}(x, u) + \sum_{j = 1}^l \int_{[0, T]} g_j(x(t), t) \, d \mu_j(t) \ge \mu_0 \mathcal{I}(x^*, u^*)
\quad \forall (x, u) \in A. \end{equation} If Slater's condition holds true, then obviously $\mu_0 > 0$, and we may suppose that $\mu_0 = 1$. Any collection $(\mu_1, \ldots, \mu_l)$ of regular Borel measures on $[0, T]$ satisfying \eqref{LagrangeMultiplier_StateConstr} with $\mu_0 = 1$ is called \textit{Lagrange multipliers} corresponding to the state constraints of problem \eqref{EvolEqStateConstrainedProblem}. Let us note that one has to suppose that Lagrange multipliers are Borel measures, since if one replaces $C[0, T]$ in the definition of $E(x^*, u^*)$ with $L^p[0, T]$, $1 \le p < + \infty$, then the set $E(x^*, u^*)$, in the general case, has empty interior, which makes the separation theorem inapplicable.
\begin{theorem} Let assumptions \ref{Assumpt_LinEvolEq_SC_ThetaGrowth}, \ref{Assumpt_LinEvolEq_SC_GlobSol}, and \ref{Assumpt_LinEvolEq_StateConstr} of Theorem~\ref{Theorem_LinEvolEq_StateConstr} be valid, and let the function $(x, u) \mapsto \theta(x, u, t)$ be convex for any $t \in [0, T]$. Suppose, in addition, that for some $1 \le p < + \infty$ there exist Lagrange multipliers $(\mu_1, \ldots, \mu_l)$ such that the Borel measures $\mu_j$ are absolutely continuous with respect to the Lebesgue measure, and their Radon-Nikodym derivatives belong to $L^{p'}[0, T]$. Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalty function $$
\Phi_{\lambda}(x, u) = \mathcal{I}(x, u)
+ \lambda \sum_{j = 1}^l \bigg( \int_0^T \max\{ g_j(x(t), t), 0 \}^p \, dt \bigg)^{1/p} $$ for problem \eqref{EvolEqStateConstrainedProblem} is completely exact on $A$. \end{theorem}
\begin{proof} Let $(x^*, u^*)$ be a globally optimal solution of problem \eqref{EvolEqStateConstrainedProblem}, and $h_j$ be the Radon-Nikodym derivative of $\mu_j$ with respect to the Lebesgue measure, $j \in J$. Denote
$\lambda_0 = \max_{j \in J} \| h_j \|_{p'}$. Then by applying \eqref{LagrangeMultiplier_StateConstr} and H\"{o}lder's inequality one obtains that \begin{align*}
\Phi_{\lambda}(x^*, u^*) = \mathcal{I}(x^*, u^*)
&\le \mathcal{I}(x, u) + \sum_{j = 1}^l \int_0^T g_j(x(t), t) h_j(t) \, dt
\le \mathcal{I}(x, u) + \sum_{j = 1}^l \int_0^T \max\{ g_j(x(t), t), 0 \} |h_j(t)| \, dt \\
&\le \mathcal{I}(x, u) + \lambda_0 \sum_{j = 1}^l \bigg( \int_0^T \max\{ g_j(x(t), t), 0 \}^p \, dt \bigg)^{1/p}
\le \Phi_{\lambda}(x, u) \end{align*} for any $(x, u) \in A$ and $\lambda \ge \lambda_0$. Hence the penalty function $\Phi_{\lambda}$ is globally exact with $\lambda^* = \lambda_0$. Now, bearing in mind the convexity of $\Phi_{\lambda}$ and arguing in the same way as in the proof of Theorem~\ref{Theorem_Exactness_EvolutionEquations_Global} one arrives at the required result. \end{proof}
\subsection{Nonlinear Systems: Local Exactness}
Let us now turn to general nonlinear optimal control problems with state constraints of the form: \begin{equation} \label{StateConstrainedProblem} \begin{split}
&\min \: \mathcal{I}(x, u) = \int_0^T \theta(x(t), u(t), t) \, dt \quad
\text{subject to } \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \\
&x(0) = x_0, \quad x(T) = x_T, \quad u \in U, \quad
g_j(x(t), t) \le 0 \quad \forall t \in [0, T], \: j \in J. \end{split} \end{equation} Here $\theta \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}$, $f \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^d$, $g_j \colon \mathbb{R}^d \times [0, T] \to \mathbb{R}$, $j \in J = \{ 1, \ldots, l \}$, are given functions, $x_0, x_T \in \mathbb{R}^d$, and $T > 0$ are fixed, $x \in W^d_{1, p}(0, T)$, and $U \subseteq L_q^m(0, T)$ is a closed set of admissible control inputs.
Let $X = W^d_{1, p}(0, T) \times L_q^m(0, T)$. Define $M = \{ (x, u) \in X \mid g_j(x(t), t) \le 0 \text{ for all } t \in [0, T], \: j \in J \}$ and $$
A = \Big\{ (x, u) \in X \mid u \in U, \: x(0) = x_0, \: x(T) = x_T, \:
\dot{x}(t) = f(x(t), u(t), t) \text{ for a.e. } t \in (0, T) \Big\}. $$ Then problem \eqref{StateConstrainedProblem} can be rewritten as the problem of minimising $\mathcal{I}(x, u)$ over the set $M \cap A$. Define $\varphi(x, u) = \sup_{t \in [0, T]} \big\{ g_1(x(t), t), \ldots, g_l(x(t), t), 0 \big\}$. Then $M = \{ (x, u) \in X \mid \varphi(x, u) = 0 \}$, and one can consider the penalised problem of minimising the penalty function $\Phi_{\lambda}$ over the set $A$. Our first goal is to obtain simple sufficient conditions for the local exactness of the penalty function $\Phi_{\lambda}$.
\begin{theorem} \label{Theorem_StateConstr_LocalExact} Let $U = L_q^m(0, T)$, $q \ge p$, and $(x^*, u^*)$ be a locally optimal solution of problem \eqref{StateConstrainedProblem}. Let also the following assumptions be valid: \begin{enumerate} \item{$\theta$ and $f$ are continuous, differentiable in $x$ and $u$, and the functions $\nabla_x \theta$, $\nabla_u \theta$, $\nabla_x f$, and $\nabla_u f$ are continuous; }
\item{either $q = + \infty$ or $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$, $f$ and $\nabla_x f$ satisfy the growth condition of order $(q / p, p)$, and $\nabla_u f$ satisfies the growth condition of order $(q / s, s)$ with $s = qp / (q - p)$ in the case $q > p$, and $\nabla_u f$ does not depend on $u$ in the case $q = p$; }
\item{$g_j$, $j \in J$, are continuous, differentiable in $x$, and the functions $\nabla_x g_j$, $j \in J$, are continuous. } \end{enumerate} Suppose finally that the linearised system \begin{equation} \label{LinearizedSystem}
\dot{h}(t) = A(t) h(t) + B(t) v(t),
\quad A(t) = \nabla_x f(x^*(t), u^*(t), t), \quad B(t) = \nabla_u f(x^*(t), u^*(t), t), \end{equation} is completely controllable using $L^q$-controls in time $T$, $A(\cdot) \in L_{\infty}^{d \times d}(0, T)$, and there exists $v \in L^q(0, T)$ such that the corresponding solution $h$ of \eqref{LinearizedSystem} with $h(0) = 0$ satisfies the condition $h(T) = 0$, and for any $j \in J$ one has \begin{equation} \label{MFCQ_StateConstr}
\langle \nabla_x g_j(x^*(t), t), h(t) \rangle < 0
\quad \forall t \in [0, T] \colon g_j(x^*(t), t) = 0. \end{equation} Then the penalty function $\Phi_{\lambda}$ for problem \eqref{StateConstrainedProblem} is locally exact at $(x^*, u^*)$. \end{theorem}
\begin{proof} By \cite[Propositions~3 and 4]{DolgopolikFominyh} the growth conditions on the function $\theta$ and its derivatives ensure that the functional $\mathcal{I}$ is Lipschitz continuous in any bounded neighbourhood of $(x^*, u^*)$. Introduce a nonlinear operator $F \colon X \to L_p^d(0, T) \times \mathbb{R}^d \times (C[0, T])^l$ and a closed convex set $K \subset L_p^d(0, T) \times \mathbb{R}^d \times (C[0, T])^l$ as follows: $$
F(x, u) = \begin{pmatrix} \dot{x}(\cdot) - f(x(\cdot), u(\cdot), \cdot) \\ x(T) \\ g(x(\cdot), \cdot) \end{pmatrix},
\quad K = \begin{pmatrix} 0 \\ x_T \\ (C_-[0, T])^l \end{pmatrix}. $$ Here $(C[0, T])^l$ is the Cartesian product of $l$ copies of the space $C[0, T]$ of real-valued continuous functions defined on $[0, T]$ endowed with the uniform norm, $g(\cdot) = (g_1(\cdot), \ldots, g_l(\cdot))^T$, and $C_-[0, T] \subset C[0, T]$ is the cone of nonpositive functions. Our aim is to apply Theorem~\ref{Theorem_LocalErrorBound} with $C = \{ (x, u) \in X \mid x(0) = x_0 \}$ to the operator $F$. If its assumptions are satisfied, then one obtains that there exists $a > 0$ such that $\dist(F(x, u), K) \ge a \dist( (x, u), F^{-1}(K) \cap C)$ for any $(x, u) \in C$ in a neighbourhood of $(x^*, u^*)$. Consequently, taking into account the facts that the set $F^{-1}(K) \cap C$ coincides with the feasible region of problem \eqref{StateConstrainedProblem}, and $$
\dist(F(x, u), K) = \sum_{j = 1}^l \max_{t \in [0, T]}\big\{ g_j(x(t), t), 0 \big\} \le l \varphi(x, u)
\quad \forall (x, u) \in A $$ one obtains that $\varphi(x, u) \ge (a / l) \dist((x, u), \Omega)$ for any $(x, u) \in A$ in a neighbourhood of $(x^*, u^*)$. Hence by applying Theorem~\ref{Theorem_LocalExactness} we arrive at the required result.
Thus, it remains to verify that the assumptions of Theorem~\ref{Theorem_LocalErrorBound} are satisfied. By Theorems~\ref{Theorem_DiffNemytskiiOperator} and \ref{Theorem_DiffStateConstr} (see Appendix~B) the growth conditions on the function $f$ and its derivatives guarantee that the mapping $F$ is strictly differentiable at $(x^*, u^*)$, and its Fr\'{e}chet derivative at this point has the form $$
DF(x^*, u^*)[h, v] =
\begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ h(T) \\
\nabla_x g(x^*(\cdot), \cdot)h(\cdot) \end{pmatrix}, $$ where $A(\cdot)$ and $B(\cdot)$ are defined in \eqref{LinearizedSystem}. Observe also that $C - (x^*, u^*) = \{ (h, v) \in X \mid h(0) = 0 \}$, since $x^*(0) = x_0$. Consequently, the regularity condition \eqref{MetricRegCond} from Theorem~\ref{Theorem_LocalErrorBound} takes the form $0 \in \core K(x^*, u^*)$ with \begin{equation} \label{RegularityCone_StateConstr}
K(x^*, u^*) = \left\{ \begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ h(T) \\
\nabla_x g(x^*(\cdot), \cdot) h(\cdot) \end{pmatrix} -
\begin{pmatrix} 0 \\ 0 \\ (C_-[0, T])^l - g(x^*(\cdot), \cdot) \end{pmatrix} \Biggm| (h, v) \in X, \: h(0) = 0
\right\}. \end{equation} Let us check that this condition is satisfied. Indeed, define $X_0 = \{ (h, v) \in X \mid h(0) = 0 \}$, and introduce the linear operator $E \colon X_0 \to L_p^d(0, T)$, $E(h, v) = \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot)$. This operator is surjective and bounded, since the linear differential equation $E(h, 0) = w$ has a unique solution for any $w \in L^d_p(0, T)$ by \cite[Theorem~1.1.3]{Filippov}, and by H\"{o}lder's inequality one has $$
\| E(h, v) \|_p \le \| \dot{h} \|_p + \| A(\cdot) \|_\infty \| h \|_{p} + \| B(\cdot) \|_s \| v \|_q
\le C \| (h, v) \|_X, $$
where $C = \max\{ 1 + \| A(\cdot) \|_{\infty}, \| B(\cdot) \|_s \}$, and $s = + \infty$ in the case $q = p$
(note that $\| B(\cdot) \|_s$ is finite due to the growth condition on $\nabla_u f$; see the proof of Theorem~\ref{Theorem_DiffNemytskiiOperator}). Consequently, by the open mapping theorem there exists $\eta_1 > 0$ such that $$
\dist((h, v), E^{-1}(w)) \le \eta_1 \| w - E(h, v) \|_p
\quad \forall (h, v) \in X_0, \: w \in L_p^d(0, T) $$ (see~\cite[formula~$(0.2)$]{Ioffe}). Taking $(h, v) = (0, 0)$ in the previous inequality one gets that for any $w \in L_p^d(0, T)$ there exists $v_1 \in L_q^m(0, T)$ such that the solution $h_1$ of the perturbed linearised equation \begin{equation} \label{PerturbedLinearizedEquation}
\dot{h}_1(t) = A(t) h_1(t) + B(t) v_1(t) + w(t), \quad h_1(0) = 0, \quad t \in [0, T] \end{equation}
satisfies the inequality $\| (h_1, v_1) \|_X \le (\eta_1 + 1) \| w \|_p$.
Introduce the operator $\mathcal{T} \colon L_q^m(0, T) \to \mathbb{R}^d$, $\mathcal{T} v = h(T)$, where $h$ is a solution of \eqref{LinearizedSystem} with the initial condition $h(0) = 0$. Arguing in a similar way to the proof of Theorem~\ref{Theorem_FixedEndPointProblem_Linear} (recall that $A(\cdot) \in L_{\infty}^{d \times d}(0, T)$) one can check that the operator $\mathcal{T}$ is bounded, while the complete controllability assumption implies that it is surjective. Hence by the open mapping theorem there exists $\eta_2 > 0$ such that $$
\dist(v, \mathcal{T}^{-1}(h_T)) \le \eta_2 | h_T - \mathcal{T}(v) |
\quad \forall v \in L_q^m(0, T), \: h_T \in \mathbb{R}^d. $$ Taking $v = 0$ one obtains that for any $h_T \in \mathbb{R}^d$ there exists $v_2 \in L_q^m(0, T)$ such that
$\| v_2 \|_q \le (\eta_2 + 1) |h_T|$, where $h_2$ is a solution of \eqref{LinearizedSystem} with $v = v_2$ satisfying the conditions $h_2(0) = 0$ and $h_2(T) = h_T$. Furthermore, by applying the Gr\"{o}nwall-Bellman and H\"{o}lder's inequalities, and the fact that $$
|h_2(t)| \le \| B(\cdot) \|_s \| v_2 \|_q + \| A(\cdot) \|_{\infty} \int_0^t |h_2(\tau)| \, d \tau
\quad \forall t \in [0, T] $$
one can verify that $\| h_2 \|_{1, p} \le L \| v_2 \|_q$ for some $L > 0$ (see Remark~\ref{Remark_SensitivityProperty}). Therefore there exists $\eta_3 > 0$ such that for any $h_T \in \mathbb{R}^d$ one can find $v_2 \in L_q^m(0, T)$
satisfying the inequality $\| (h_2, v_2) \|_X \le \eta_3 |h_T|$, where $h_2$ is a solution of \eqref{LinearizedSystem} with $v = v_2$ such that $h_2(0) = 0$ and $h_2(T) = h_T$.
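For completeness, let us sketch one possible derivation of the estimate $\| h_2 \|_{1, p} \le L \| v_2 \|_q$ (assuming the norm convention $\| h \|_{1, p} = \| h \|_p + \| \dot{h} \|_p$): the Gr\"{o}nwall-Bellman inequality applied to the integral bound on $|h_2(t)|$ displayed above yields $\| h_2 \|_{\infty} \le \| B(\cdot) \|_s \| v_2 \|_q e^{\| A(\cdot) \|_{\infty} T}$, while by H\"{o}lder's inequality $\| \dot{h}_2 \|_p \le \| A(\cdot) \|_{\infty} \| h_2 \|_p + \| B(\cdot) \|_s \| v_2 \|_q$, so that one can take $$
L = \Big( \big( 1 + \| A(\cdot) \|_{\infty} \big) T^{1/p} e^{\| A(\cdot) \|_{\infty} T} + 1 \Big) \| B(\cdot) \|_s. $$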
Choose $r_1, r_2 > 0$, $w \in L_p^d(0, T)$ with $\| w \|_p \le r_1$, and $h_T \in \mathbb{R}^d$ with $|h_T| \le r_2$. As we proved earlier, there exists $(h_1, v_1) \in X$ satisfying \eqref{PerturbedLinearizedEquation} and such that
$\| (h_1, v_1) \|_X \le (\eta_1 + 1) \| w \|_p \le (\eta_1 + 1) r_1$. By inequality~\eqref{SobolevImbedding} one has
$\| h_1 \|_{\infty} \le C_p \| h_1 \|_{1, p} \le C_p (\eta_1 + 1) r_1$ for some $C_p > 0$ independent of $h_1$. Furthermore, there exists $(h_2, v_2) \in X_0$ satisfying \eqref{LinearizedSystem}, and such that $h_2(0) = 0$,
$h_2(T) = h_T - h_1(T)$, and $\| (h_2, v_2) \|_X \le \eta_3 |h_T - h_1(T)|$. Hence, in particular, one gets that $$
\| h_2 \|_{\infty} \le C_p \eta_3 |h_T - h_1(T)| \le C_p \eta_3 |h_T| + C_p \eta_3 \| h_1 \|_{\infty}
\le C_p \eta_3 r_2 + C_p^2 \eta_3 (\eta_1 + 1) r_1. $$ Finally, by our assumption there exists $(h_3, v_3) \in X_0$ satisfying \eqref{LinearizedSystem}, \eqref{MFCQ_StateConstr} and such that $h_3(T) = 0$. For any $j \in J$ denote $T_j = \{ t \in [0, T] \mid g_j(x^*(t), t) = 0 \}$. Clearly, the sets $T_j$ are compact, which implies that for any $j \in J$ there exists $\beta_j > 0$ such that $\langle \nabla_x g_j(x^*(t), t), h_3(t) \rangle \le - \beta_j$ for all $t \in T_j$ due to \eqref{MFCQ_StateConstr} and the continuity of the functions $\nabla_x g_j$, $x^*$, and $h_3$. With the use of the compactness of the sets $T_j$ one obtains that for any $j \in J$ there exists a set $\mathcal{O}_j \subset [0, T]$ such that $\mathcal{O}_j$ is open in $[0, T]$, $T_j \subset \mathcal{O}_j$, and $\langle \nabla_x g_j(x^*(t), t), h_3(t) \rangle \le - \beta_j / 2$ for all $t \in \mathcal{O}_j$. On the other hand, for any $j \in J$ there exists $\gamma_j > 0$ such that $g_j(x^*(t), t) \le - \gamma_j$ for any $t \in [0, T] \setminus \mathcal{O}_j$, since by definition $g_j(x^*(t), t) < 0$ for all $t \notin T_j$ and the set $[0, T] \setminus \mathcal{O}_j$ is compact.
Note that for any $\alpha > 0$ the pair $(\alpha h_3, \alpha v_3)$ belongs to $X_0$ and satisfies \eqref{LinearizedSystem} and the equality $\alpha h_3(T) = 0$. Choosing a sufficiently small $\alpha > 0$ one can suppose that $\langle \nabla_x g_j(x^*(t), t), \alpha h_3(t) \rangle < \gamma_j$ for all $t \in [0, T] \setminus \mathcal{O}_j$ and $j \in J$, while for any $t \in \mathcal{O}_j$ one has $\langle \nabla_x g_j(x^*(t), t), \alpha h_3(t) \rangle \le - \alpha \beta_j / 2$. Thus, replacing $(h_3, v_3)$ with $(\alpha h_3, \alpha v_3)$, where $\alpha > 0$ is small enough, one can suppose that $$
\langle \nabla_x g_j(x^*(t), t), h_3(t) \rangle + g_j(x^*(t), t) < 0
\quad \forall t \in [0, T] \quad \forall j \in J. $$ With the use of the continuity of $g_j$, $\nabla_x g_j$, $x^*$, and $h_3$ one obtains that there exists $r_3 > 0$ such that $$
\langle \nabla_x g_j(x^*(t), t), h_3(t) \rangle + g_j(x^*(t), t) \le - r_3
\quad \forall t \in [0, T] \quad \forall j \in J. $$ Choosing $r_1 > 0$ and $r_2 > 0$ sufficiently small one gets that for any $j \in J$ \begin{equation} \label{NegConstrShift}
\Big\langle \nabla_x g_j(x^*(t), t), h_1(t) + h_2(t) + h_3(t) \Big\rangle
+ g_j(x^*(t), t) \le - \frac{r_3}{2}
\quad \forall t \in [0, T], \end{equation}
since $\| h_1 \|_{\infty}$ and $\| h_2 \|_{\infty}$ can be made arbitrarily small by a proper choice of $r_1$ and $r_2$.
Define $h = h_1 + h_2 + h_3$ and $v = v_1 + v_2 + v_3$. Then $(h, v) \in X$, $h(0) = 0$, $h(T) = h_T$, $(h, v)$ satisfies \eqref{PerturbedLinearizedEquation}, and \eqref{NegConstrShift} holds true. Therefore, $(w, h_T, y)^T \in K(x^*, u^*)$ for any $y = (y_1, \ldots, y_l)^T \in (C[0, T])^l$ such that
$\| y_j \|_{\infty} \le r_3 / 2$ for all $j \in J$ (see \eqref{RegularityCone_StateConstr}). In other words, $B(0, r_1) \times B(0, r_2) \times B(0, r_3 / 2) \subset K(x^*, u^*)$, i.e. $0 \in \interior K(x^*, u^*)$, and the proof is complete. \end{proof}
\begin{remark} {(i)~From \eqref{MFCQ_StateConstr} it follows that $g_j(x_0, 0) < 0$ and $g_j(x_T, T) < 0$ for all $j \in J$, since $h(0) = h(T) = 0$ in \eqref{MFCQ_StateConstr}. Furthermore, the assumption that there exists a control input $v$ such that the corresponding solution $h$ of the linearised system satisfies \eqref{MFCQ_StateConstr} is, roughly speaking, equivalent to the assumption that there exists $(h, v) \in X$ such that for any sufficiently small $\alpha \ge 0$ the point $(x_{\alpha}, u_{\alpha}) = (x + \alpha h + r_1(\alpha), u + \alpha v + r_2(\alpha))$ is feasible for problem \eqref{StateConstrainedProblem} for some $(r_1(\alpha), r_2(\alpha)) \in X$ such that
$\| (r_1(\alpha), r_2(\alpha)) \|_X / \alpha \to 0$ as $\alpha \to +0$, and $g_j(x_{\alpha}(t), t) < 0$ for all $t \in [0, T]$, $j \in J$ and for any sufficiently small $\alpha$. Thus, assumption \eqref{MFCQ_StateConstr} is, in essence, a local version of Slater's condition in the nonconvex case. }
\noindent{(ii)~It should be noted that in the case when there is no terminal constraint the complete controllability assumption and the assumption that the equality $h(T) = 0$ holds true for $h$ satisfying \eqref{MFCQ_StateConstr} can be dropped from Theorem~\ref{Theorem_StateConstr_LocalExact}. }
\noindent{(iii)~One might want to use the cone $L^r(0, T)_- = \{ x \in L^r(0, T) \mid x(t) \le 0 \text{ for a.e. } t \in (0, T) \}$ instead of $C_-[0, T]$ in the proof of Theorem~\ref{Theorem_StateConstr_LocalExact} in order to verify the local exactness of the penalty function for problem \eqref{StateConstrainedProblem} with the penalty term $\varphi(x, u) = \sum_{j = 1}^l ( \int_0^T \max\{ g_j(x(t), t), 0 \}^r \, dt )^{1/r}$, $1 \le r < + \infty$. However, note that the cone $L^r(0, T)_-$ has empty algebraic interior, and for that reason an attempt to apply Theorem~\ref{Theorem_LocalErrorBound} leads to incompatible assumptions on the state constraints and the linearised system. Indeed, in this case the regularity condition \eqref{MetricRegCond} from Theorem~\ref{Theorem_LocalErrorBound} takes the form $$
0 \in \core \left\{ \begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ h(T) \\
\nabla_x g(x^*(\cdot), \cdot) h(\cdot) \end{pmatrix} -
\begin{pmatrix} 0 \\ 0 \\ (L^r(0, T)_-)^l - g(x^*(\cdot), \cdot) \end{pmatrix} \Biggm| (h, v) \in X, \: h(0) = 0
\right\}. $$ Hence, in particular, $0 \in \core K_0(x^*)$, where $K_0(x^*)$ is the union of the cones $\{ \nabla_x g(x^*(\cdot), \cdot) h(\cdot) + g(x^*(\cdot), \cdot) \} - (L^r(0, T)_-)^l$ with $h$ being a solution of \eqref{LinearizedSystem} for some $v \in L^m_q(0, T)$ such that $h(0) = h(T) = 0$. However, for the function $y(t) = - t^{1/2r}$ one obviously has $y \in L^r(0, T)$ and $\alpha y \notin K_0(x^*)$ for any $\alpha > 0$ (for the sake of simplicity we assume that $l = 1$), since the function $\nabla_x g(x^*(\cdot), \cdot) h(\cdot) + g(x^*(\cdot), \cdot)$ is continuous and $h(0) = 0$. Thus, $0 \notin \core K_0(x^*)$, and Theorem~\ref{Theorem_LocalErrorBound} cannot be applied. } \end{remark}
\subsection{Nonlinear Systems: Complete Exactness}
Now we turn to the derivation of sufficient conditions for the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{StateConstrainedProblem}. As in the case of terminal constraints, the derivation of easily verifiable conditions for exact penalisation of pointwise state constraints does not seem possible in the nonlinear case. Therefore, our main goal, once again, is not to obtain easily verifiable conditions, but to understand what kind of general properties the nonlinear system and state constraints must possess to ensure exact penalisation. To this end, we directly apply Theorem~\ref{Theorem_CompleteExactness} in order to obtain general sufficient conditions for complete exactness. Then we consider a particular case in which one can obtain more readily verifiable sufficient conditions for the complete exactness of the penalty function.
Recall that \textit{the contingent cone} to a subset $C$ of a normed space $Y$ at a point $x \in C$, denoted by $K_C(x)$, consists of all those vectors $v \in Y$ for which there exist sequences $\{ v_n \} \subset Y$ and $\{ \alpha_n \} \subset (0, + \infty)$ such that $v_n \to v$ and $\alpha_n \to 0$ as $n \to \infty$, and $x + \alpha_n v_n \in C$ for all $n \in \mathbb{N}$. It should be noted that in the case $U = L^m_q(0, T)$ by the Lyusternik-Graves theorem (see, e.g. \cite{Ioffe}) for any $(x, u) \in A$ one has \begin{align*}
K_A(x, u) = \Big\{ &(h, v) \in X \Bigm| \\
&\dot{h}(t) = \nabla_x f(x(t), u(t), t) h(t) + \nabla_u f(x(t), u(t), t) v(t) \text{ for a.e. } t \in (0, T),
h(0) = h(T) = 0 \Big\} \end{align*} provided the linearised system is completely controllable, and the assumptions of Theorem~\ref{Theorem_DiffNemytskiiOperator} hold true. In the case when there is no terminal constraint, the complete controllability assumption and the condition $h(T) = 0$ are redundant.
For any $x \in \mathbb{R}^d$ and $t \in [0, T]$ denote $\phi(x, t) = \max_{j \in J} \max\{ g_j(x, t), 0 \}$. Then for any $(x, u) \in X$ one has $\varphi(x, u) = \max_{t \in [0, T]} \phi(x(t), t)$. Define $T(x) = \{ t \in [0, T] \mid \phi(x(t), t) = \varphi(x, u) \}$ and $J(x, t) = \{ j \in J \mid g_j(x(t), t) = \varphi(x, u) \}$. Clearly, $J(x, t) \ne \emptyset$ iff $t \in T(x)$, provided $\varphi(x, u) > 0$.
Let $\mathcal{I}^*$ be the optimal value of problem \eqref{StateConstrainedProblem}. Note that the set $\Omega_{\delta} = \{ (x, u) \in A \mid \varphi(x, u) < \delta \}$ consists of all those trajectories $x(\cdot)$ of the system $\dot{x} = f(x, u, t)$, $u \in U$, $x(0) = x_0$, $x(T) = x_T$, which satisfy the perturbed state constraints $g_j(x(t), t) < \delta$ for all $t \in [0, T]$ and $j \in J$.
\begin{theorem} \label{Theorem_StateConstrainedProblem_Nonlinear} Let the following assumptions be valid: \begin{enumerate} \item{$\theta$ is continuous and differentiable in $x$ and $u$, the functions $g_j$, $j \in J$, are continuous, differentiable in $x$, and the functions $\nabla_x \theta$, $\nabla_u \theta$, $\nabla_x g_j$, and $f$ are continuous; }
\item{either $q = + \infty$ or $\theta$ and $\nabla_x \theta$ satisfy the growth condition of order $(q, 1)$, while $\nabla_u \theta$ satisfies the growth condition of order $(q - 1, q')$; }
\item{there exists a globally optimal solution of problem \eqref{StateConstrainedProblem};}
\item{there exist $\lambda_0 > 0$, $c > \mathcal{I}^*$, and $\delta > 0$ such that the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ is bounded in $X$, and the function $\Phi_{\lambda_0}$ is bounded below on $A$; \label{Assumpt_StateConstr_SublevelBounded} }
\item{there exists $a > 0$ such that for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ one can find $(h, v) \in K_A(x, u)$ such that for any $t \in T(x)$ one has \begin{equation} \label{MFCQ_StateConstr_Nonlocal}
\langle \nabla_x g_j(x(t), t), h(t) \rangle \le - a \| (h, v) \|_X \quad \forall j \in J(x, t). \end{equation} \label{Assumpt_StateConstr_Decay} } \end{enumerate} Then there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalty function $\Phi_{\lambda}$ for problem \eqref{StateConstrainedProblem} is completely exact on $S_{\lambda}(c)$. \end{theorem}
\begin{proof} By \cite[Propositions~3 and 4]{DolgopolikFominyh} the functional $\mathcal{I}$ is Lipschitz continuous on any bounded open set containing the set $S_{\lambda_0}(c) \cap \Omega_{\delta}$ due to the growth conditions on the function $\theta$ and its derivatives. Arguing in the same way as in the proof of Theorem~\ref{Theorem_FixedEndPointProblem_NonLinear} one can easily verify that the continuity of the function $f$ along with the closedness of the set $U$ ensure that the set $A$ is closed. The continuity of the penalty term $\varphi$ on $X$ follows from Corollary~\ref{Corollary_StateConstrPenTerm_Contin} (see Appendix~B).
Thus, by Theorem~\ref{Theorem_CompleteExactness} it is sufficient to check that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x, u) \le - a$ for all $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Our aim is to show that assumption~\ref{Assumpt_StateConstr_Decay} ensures the validity of this inequality.
Fix any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$, and let $(h, v) \in K_A(x, u)$ be from assumption~\ref{Assumpt_StateConstr_Decay}. Then by the definition of the contingent cone there exist sequences $\{ (h_n, v_n) \} \subset X$ and $\{ \alpha_n \} \subset (0, + \infty)$ such that $(h_n, v_n) \to (h, v)$ and $\alpha_n \to 0$ as $n \to \infty$, and for all $n \in \mathbb{N}$ one has $(x + \alpha_n h_n, u + \alpha_n v_n) \in A$.
Denote $\phi_j(x) = \max_{t \in [0, T]} g_j(x(t), t)$. Observe that $\varphi(x, u) = \max_{j \in J} \phi_j(x)$, since $\varphi(x, u) > 0$ (recall that $(x, u) \notin \Omega$). It is well-known (see, e.g. \cite[Sects.~4.4 and 4.5]{IoffeTihomirov}) that the following equality holds true: $$
\lim_{n \to \infty} \frac{\phi_j(x + \alpha_n h_n) - \phi_j(x)}{\alpha_n}
= \max_{t \in T_j(x)} \langle \nabla_x g_j(x(t), t), h(t) \rangle, \quad
T_j(x) = \{ t \in [0, T] \mid g_j(x(t), t) = \phi_j(x) \}. $$ Hence by the Danskin-Demyanov theorem (see, e.g. \cite[Theorem~4.4.3]{IoffeTihomirov}) one has $$
\lim_{n \to \infty} \frac{\varphi(x + \alpha_n h_n, u + \alpha_n v_n) - \varphi(x, u)}{\alpha_n}
= \max_{j \in J(x)} \max_{t \in T_j(x)} \langle \nabla_x g_j(x(t), t), h(t) \rangle, \quad
J(x) = \{ j \in J \mid \phi_j(x) = \varphi(x, u) \}. $$ Observe that $T(x) = \cup_{j \in J(x)} T_j(x)$, and $j \in J(x, t)$ for some $t \in T(x)$ iff $j \in J(x)$ and $t \in T_j(x)$. Consequently, by applying assumption~\ref{Assumpt_StateConstr_Decay} one obtains that $$
\varphi^{\downarrow}_A(x, u) \le
\lim_{n \to \infty} \frac{\varphi(x + \alpha_n h_n, u + \alpha_n v_n) - \varphi(x, u)}{\alpha_n \| (h_n, v_n) \|_X}
= \frac{1}{\| (h, v) \|_X} \max_{j \in J(x)} \max_{t \in T_j(x)} \langle \nabla_x g_j(x(t), t), h(t) \rangle
\le -a, $$ and the proof is complete. \end{proof}
Clearly, the main assumption ensuring that the penalty function $\Phi_{\lambda}$ for problem \eqref{StateConstrainedProblem} is completely exact is assumption~\ref{Assumpt_StateConstr_Decay}. This assumption can be easily explained in the case $l = 1$, i.e. when there is only one state constraint. Roughly speaking, in this case assumption~\ref{Assumpt_StateConstr_Decay} means that if a trajectory $x(\cdot)$ of the system $\dot{x} = f(x, u, t)$ with $x(0) = x_0$ and $x(T) = x_T$ slightly violates the constraints (i.e. $g_1(x(t), t) < \delta$ for all $t$), then by changing the control input $u$ in such a way that the endpoint conditions $x(0) = x_0$ and $x(T) = x_T$ continue to hold one must be able to slightly shift the trajectory $x(t)$ in a direction close to $- \nabla_x g_1(x(t), t)$ at those points $t$ for which the constraint violation measure $\phi(x(t), t) = \max\{ 0, g_1(x(t), t) \}$ is the largest. However, note that this shift must be uniform for all $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ in the sense that inequality \eqref{MFCQ_StateConstr_Nonlocal} must hold true for all those $(x, u)$. The validity of this inequality in a neighbourhood of a given point can be verified with the use of the same technique as in the proof of Theorem~\ref{Theorem_StateConstr_LocalExact}. Namely, one can check that in the case $U = L^m_q(0, T)$ inequality \eqref{MFCQ_StateConstr_Nonlocal} holds true in a neighbourhood of a given point $(\widehat{x}, \widehat{u})$, provided there exists a solution $(h, v)$ of the corresponding linearised system such that $h(0) = h(T) = 0$ and $\langle \nabla_x g_j(\widehat{x}(t), t), h(t) \rangle < 0$ for all $t \in T(\widehat{x})$ and $j \in J(\widehat{x}, t)$. Consequently, the main difficulty in verifying assumption~\ref{Assumpt_StateConstr_Decay} stems from the fact that the validity of inequality \eqref{MFCQ_StateConstr_Nonlocal} must be checked not locally, but on the set $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Let us briefly discuss a particular case in which one can easily verify that assumption~\ref{Assumpt_StateConstr_Decay} holds true.
\begin{example} \label{Example_LTV_SlaterImpliesExactPen} Suppose that the system is linear, i.e. $f(x, u, t) = A(t) x + B(t) u$, the set $U$ of admissible control inputs is convex, the functions $g_j(x, t)$ are convex in $x$, and Slater's condition holds true, i.e. there exists $(\widehat{x}, \widehat{u}) \in A$ such that $g_j(\widehat{x}(t), t) < 0$ for all $ t \in [0, T]$ and $j \in J$. Choose any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. For any $n \in \mathbb{N}$ define $\alpha_n = 1 / n$, $(h, v) = (\widehat{x} - x, \widehat{u} - u)$, and $(x_n, u_n) = \alpha_n (\widehat{x}, \widehat{u}) + (1 - \alpha_n) (x, u) = (x, u) + \alpha_n (h, v)$. Then $(x_n, u_n) \in A$ for all $n \in \mathbb{N}$ and $(h, v) \in K_A(x, u)$ due to the convexity of the set $U$ and the linearity of the system. Fix any $j \in J$ and $t \in [0, T]$ such that $g_j(x(t), t) \ge 0$. Due to the convexity of $g_j(x, t)$ in $x$ one has \begin{align*}
\langle \nabla_x g_j(x(t), t), \alpha_n h(t) \rangle
&\le g_j(x(t) + \alpha_n h(t), t) - g_j(x(t), t) \\
&\le \alpha_n g_j(\widehat{x}(t), t) + (1 - \alpha_n) g_j(x(t), t) - g_j(x(t), t)
\le \alpha_n g_j(\widehat{x}(t), t) \le \alpha_n \eta, \end{align*} where $\eta = \max_{j \in J} \max_{t \in [0, T]} g_j(\widehat{x}(t), t)$. Note that $\eta < 0$ by Slater's condition. Consequently, for any $t \in T(x)$ and $j \in J(x, t)$ one has $$
\langle \nabla_x g_j(x(t), t), h(t) \rangle \le \frac{\eta}{\| (h, v) \|_X} \| (h, v) \|_X. $$ By assumption~\ref{Assumpt_StateConstr_SublevelBounded} of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} the set $S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ is bounded. Therefore, there exists $C > 0$ such that
$\| (h, v) \|_X = \| (\widehat{x} - x, \widehat{u} - u) \|_X \le C$ for all $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$. Hence assumption~\ref{Assumpt_StateConstr_Decay}
of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} is satisfied with $a = |\eta| / C$. \end{example}
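The mechanism of this example is easy to illustrate numerically. The following Python sketch (the scalar system $\dot{x} = u$, the constraint $g_1(x) = x - 1 \le 0$, the Slater point $\widehat{x} \equiv 0$, and all numerical data in it are hypothetical and serve purely as an illustration) checks the inequality $\langle \nabla_x g_1(x(t), t), h(t) \rangle \le \eta$ with $h = \widehat{x} - x$ at the points of maximal constraint violation:
\begin{verbatim}
import numpy as np

# Hypothetical instance: dx/dt = u, g_1(x) = x - 1 <= 0,
# Slater point x_hat(t) = 0, so eta = max_t g_1(x_hat(t)) = -1.
T, N = 1.0, 1000
t = np.linspace(0.0, T, N)

# a trajectory slightly violating the constraint
x = 1.2 * np.sin(np.pi * t / T)
eta = -1.0                      # g_1(x_hat(t)) = -1 for all t
h = -x                          # h = x_hat - x (grad g_1 = 1)

# points where the violation max{g_1(x(t)), 0} is maximal
viol = np.maximum(x - 1.0, 0.0)
active = viol >= viol.max() - 1e-12
assert np.all(h[active] <= eta + 1e-12)

C = np.abs(h).max() + 1.0       # crude bound on ||(h, v)||_X
print("decay constant a =", abs(eta) / C)
\end{verbatim}
Such a check only illustrates how the Slater point generates a uniform descent direction; the constant $a = |\eta| / C$ itself is, of course, obtained analytically.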
\begin{remark} Apparently, assumption~\ref{Assumpt_StateConstr_Decay} of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} holds true in a much more general case than the case of optimal control problems for linear systems with convex state constraints. In particular, it seems that in the case when $l = 1$ (i.e. there is only one state constraint),
$g_1(x_0, 0) < 0$, $g_1(x_T, T) < 0$, and $\inf |\nabla_x g_1(x, t)| > 0$, where the infimum is taken over all those $t \in [0, T]$ and $x \in \mathbb{R}^d$ for which $0 < g_1(x, t) < \delta$, assumption~\ref{Assumpt_StateConstr_Decay} of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} is satisfied under very mild assumptions on the system. On the other hand, if either the initial or the terminal state lies on the boundary of the feasible region (i.e. either $g_1(x_0, 0) = 0$ or $g_1(x_T, T) = 0$), then assumption~\ref{Assumpt_StateConstr_Decay} cannot be satisfied. A detailed analysis of these conditions lies outside the scope of this article, and we leave it as a challenging open problem for future research. \end{remark}
\begin{remark} One can easily extend the proof of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} to the case when
the penalty term $\varphi$ is defined as $\varphi(x, u) = \| \phi(x(\cdot), \cdot) \|_r$ for some $r \in (1, + \infty)$, where $\phi(x, t) = \max_{j \in J} \max\{ g_j(x, t), 0 \}$ (i.e. the state constraints are penalised via the $L^r$-norm). In this case assumption~\ref{Assumpt_StateConstr_Decay} takes the following form: there exists $a > 0$ such that for any $(x, u) \in S_{\lambda_0}(c) \cap (\Omega_{\delta} \setminus \Omega)$ one can find $(h, v) \in K_A(x, u)$ satisfying the inequality \begin{equation} \label{StateConstr_Decay_Impossible}
\frac{1}{\varphi(x, u)^{r - 1}} \int_0^T \phi(x(t), t)^{r - 1}
\max_{j \in J_0(x(t), t)} \langle \nabla_x g_j(x(t), t), h(t) \rangle \, dt \le - a \| (h, v) \|_X, \end{equation} where $J_0(x, t) = \{ j \in J \cup \{ 0 \} \mid g_j(x, t) = \phi(x, t) \}$ and $g_0(x, t) \equiv 0$. However, the author failed to find any optimal control problems for which this assumption can be verified. \end{remark}
\subsection{Nonlinear Systems: A Different View}
As Examples~\ref{CounterExample_StateEqConstr} and \ref{CounterExample_StateInEqConstr} demonstrate, penalty functions for problems with state constraints may fail to be exact due to the fact that the penalty term $\varphi$, unlike the cost functional $\mathcal{I}(x, u)$, does not depend on the control inputs $u$ explicitly. In the case when $\mathcal{I}$ does not explicitly depend on $u$, one can utilise a somewhat different approach and obtain stronger results on the exactness of penalty functions for state constrained problems. Furthermore, this approach serves as a proper motivation to consider a general theory of exact penalty functions in the \textit{metric} space setting (as it is done in Section~\ref{Sect_ExactPenaltyFunctions}), but not in the normed space setting.
Consider the following variable-endpoint optimal control problem with state inequality constraints: \begin{equation} \label{VariableEndpointProblem} \begin{split}
&\min \: \mathcal{I}(x) = \int_0^T \theta(x(t), t) \, dt + \zeta(x(T)) \quad
\text{subject to} \quad \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \\
&x(0) = x_0, \quad x(T) \in S_T, \quad u \in U, \quad
g_j(x(t), t) \le 0 \quad \forall t \in [0, T], \: j \in J. \end{split} \end{equation} Here $\theta \colon \mathbb{R}^d \times [0, T] \to \mathbb{R}$, $\zeta \colon \mathbb{R}^d \to \mathbb{R}$, $f \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^d$, and $g_j \colon \mathbb{R}^d \times [0, T] \to \mathbb{R}$, $j \in J = \{ 1, \ldots, l \}$, are given functions, $x_0 \in \mathbb{R}^d$ and $T > 0$ are fixed, while $S_T \subseteq \mathbb{R}^d$ and $U \subseteq L_q^m(0, T)$ are closed sets. It should be noted that, with the use of the standard time scaling transformation, time-optimal control problems can be recast as problems of the form \eqref{VariableEndpointProblem}.
We will treat problem \eqref{VariableEndpointProblem} as a variational problem, not as an optimal control one. To this end, fix some $p \in (1, + \infty]$, and define $$
X = \Big\{ x \in (C[0, T])^d \Bigm| \exists u \in U \colon
x(t) = x_0 + \int_0^t f(x(\tau), u(\tau), \tau) \, d \tau \quad \forall t \in [0, T] \Big\}, $$ i.e. $X$ is the set of trajectories of the controlled system under consideration. We equip $X$ with the metric
$d_X(x, y) = \| x - y \|_p + |x(T) - y(T)|$. Define $A = \{ x \in X \mid x(T) \in S_T \}$ and $M = \{ x \in X \mid g_j(x(t), t) \le 0 \: \forall t \in [0, T], \: j \in J \}$. Then problem \eqref{VariableEndpointProblem} can be rewritten as the problem of minimising $\mathcal{I}(x)$, $x \in X$, over the set $M \cap A$. Observe that the set $A$ is closed in $X$ due to the fact that the set $S_T$ is closed and that convergence of a sequence $\{ x_n \}$ to some $x$ in the metric space $X$ implies convergence of $\{ x_n(T) \}$ to $x(T)$. Let us also point out simple sufficient conditions for the metric space $X$ to be complete.
\begin{proposition} \label{Prop_CompleteMetricSpace} Let the function $f$ be continuous and $U = \{ u \in L^m_{\infty}(0, T) \mid u(t) \in Q \text{ for a.e. } t \in (0, T) \}$ for some compact convex set $Q \subset \mathbb{R}^m$. Suppose also that the set $f(x, Q, t)$ is convex for all $x \in \mathbb{R}^d$ and $t \in [0, T]$, and for any $u \in U$ a solution of $\dot{x} = f(x, u, t)$ with $x(0) = x_0$ is defined on $[0, T]$. Then $X$ is a complete metric space and a compact subset of $(C[0, T])^d$. \end{proposition}
\begin{proof} Under our assumptions the space $X$ consists of all solutions of the differential inclusion $\dot{x} \in F(x, t)$, $x(0) = x_0$, with $F(x, t) = f(x, Q, t)$ by Filippov's theorem (see, e.g. \cite[Theorem~8.2.10]{AubinFrankowska}). Furthermore, by \cite[Theorem~2.7.6]{Filippov} the set $X$ is compact in $(C[0, T])^d$.
Let $\{ x_n \} \subset X$ be a Cauchy sequence in $X$. Since $X$ is compact in $(C[0, T])^d$, there exists a subsequence $\{ x_{n_k} \}$ uniformly converging to some $x^* \in X$, which obviously implies that $\{ x_{n_k} \}$ converges to $x^*$ in $X$. Hence with the use of the fact that $\{ x_n \}$ is a Cauchy sequence in $X$ one can easily check that $\{ x_n \}$ converges to $x^*$ in $X$. Thus, $X$ is a complete metric space. \end{proof}
Formally introduce the penalty term $$
\varphi(x) = \| \phi(x(\cdot), \cdot) \|_p \quad \forall x \in X, \quad
\phi(x, t) = \max\{ g_1(x, t), \ldots, g_l(x, t), 0 \} \quad \forall x \in \mathbb{R}^d, \: t \in [0, T] $$ (note that here $p$ is the same as in the definition of metric in $X$). Then $M = \{ x \in X \mid \varphi(x) = 0 \}$, and one can consider the penalised problem of minimising the penalty function $\Phi_{\lambda}(x) = \mathcal{I}(x) + \lambda \varphi(x)$ over the set $A$, which in the case $p < + \infty$ can be formally written as follows: \begin{equation} \label{PenalizedProblem_StateConstraints} \begin{split}
{}&\min \: \Phi_{\lambda}(x) = \int_0^T \theta(x(t), t) \, dt +
\lambda \bigg( \int_0^T \max\{ g_1(x(t), t), \ldots, g_l(x(t), t), 0 \}^p \, dt \bigg)^{1/p} + \zeta(x(T)) \\
{}&\text{subject to } x(t) = x_0 + \int_0^t f(x(\tau), u(\tau), \tau) \, d \tau, \quad t \in [0, T], \quad
x(T) \in S_T, \quad u \in U, \quad x \in (C[0, T])^d. \end{split} \end{equation} Note, however, that due to our choice of the space $X$ and the metric in this space the notions of locally optimal solutions/inf-stationary points of this problem (and problem \eqref{VariableEndpointProblem}) are understood in a rather specific sense. In particular, $(x^*, u^*)$ is a locally optimal solution of this problem iff for any feasible point
$(x, u)$ satisfying the inequality $\| x - x^* \|_p + |x(T) - x^*(T)| < r$ for some $r > 0$ one has $\Phi_{\lambda}(x) \ge \Phi_{\lambda}(x^*)$. It should be mentioned that any locally optimal solution/inf-stationary point of problem \eqref{VariableEndpointProblem} (or \eqref{PenalizedProblem_StateConstraints}) in $X$ is also a locally optimal solution/inf-stationary point of problem \eqref{VariableEndpointProblem} (or \eqref{PenalizedProblem_StateConstraints}) in the space $W^d_{1, p}(0, T) \times L^m_q(0, T)$, but the converse statement is not true. In a sense, one can say that our choice of the underlying space $X$ in this section reduces the number of locally optimal solutions/inf-stationary points (and, as a result, leads to a weaker notion of the complete exactness of $\Phi_{\lambda}$ than the one used in the previous section).
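Before turning to sufficient conditions for the complete exactness of $\Phi_{\lambda}$, let us sketch how the penalty term and the penalty function could be evaluated in actual computations. The following Python fragment (the quadrature rule, the sample trajectory, and the constraint in it are hypothetical and chosen only for illustration) evaluates $\varphi(x)$ and $\Phi_{\lambda}(x)$ on a trajectory sampled on a uniform grid:
\begin{verbatim}
import numpy as np

def penalty_term(x, t, g_list, p):
    # phi(x) = || max{g_1(x(.), .), ..., g_l(x(.), .), 0} ||_p
    # on the grid t, via a simple Riemann sum
    viol = np.maximum.reduce([g(x, t) for g in g_list])
    viol = np.maximum(viol, 0.0)
    if np.isinf(p):
        return viol.max()
    return (np.sum(viol ** p) * (t[1] - t[0])) ** (1.0 / p)

# hypothetical data: x(t) = sin(2 pi t), g_1(x, t) = x - 1/2
t = np.linspace(0.0, 1.0, 2001)
x = np.sin(2.0 * np.pi * t)
phi = penalty_term(x, t, [lambda x, t: x - 0.5], p=2)
cost = np.sum(x ** 2) * (t[1] - t[0])  # stands in for I(x)
print("Phi_lambda(x) =", cost + 10.0 * phi)
\end{verbatim}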
Let us derive sufficient conditions for the complete exactness of the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndpointProblem}. To conveniently formulate these conditions, define $g_0(x, t) \equiv 0$. Then $\phi(x, t) \equiv \max\{ g_j(x, t) \mid j \in J \cup \{ 0 \} \}$. For any $x \in \mathbb{R}^d$ and $t \in [0, T]$ let $J(x, t) = \{ j \in J \cup \{ 0 \} \mid \phi(x, t) = g_j(x, t) \}$. Finally, suppose that the functions $g_j$ are differentiable in $x$, and define the subdifferential $\partial_x \phi(x, t)$ of the function $x \mapsto \phi(x, t)$ as follows: \begin{equation} \label{SubdiffOfMaxStateConstr}
\partial_x \phi(x, t) = \co\big\{ \nabla_x g_j(x, t) \bigm| j \in J(x, t) \big\}. \end{equation} Let us point out that $\partial_x \phi(x, t)$ is a convex compact set, and $\partial_x \phi(x, t) = \{ 0 \}$, if $g_j(x, t) < 0$ for all $j \in J$.
Denote by $\mathcal{I}^*$ the optimal value of problem \eqref{VariableEndpointProblem}, and recall that $\Omega_{\delta} = \{ x \in A \mid \varphi(x) < \delta \}$. Observe that in the case $p = + \infty$ the set $\Omega_{\delta}$ consists of all those trajectories $x(\cdot)$ of the system that satisfy the perturbed constraints $g_j(x(t), t) < \delta$ for all $t \in [0, T]$ and $j \in J$. In the case $p < + \infty$ the set $\Omega_{\delta}$
consists of all those trajectories $x(\cdot)$ for which there exists $w \in L^p(0, T)$ with $\| w \|_p < \delta$ such that $g_j(x(t), t) \le w(t)$ for all $t \in [0, T]$ and $j \in J$, which implies that at every point $t \in [0, T]$ the violation of the state constraints can be arbitrarily large, i.e. $\phi(x(t), t)$ can be arbitrarily large as long as
$\| \phi(x(\cdot), \cdot) \|_p < \delta$.
To avoid imposing complicated and restrictive assumptions on the problem data, we prove the following theorem in the simplest case, namely, when the set $X$ is compact in $(C[0, T])^d$. This assumption holds true, in particular, if the assumptions of Proposition~\ref{Prop_CompleteMetricSpace} are satisfied.
\begin{theorem} \label{Theorem_StateConstr_Nonlinear_Alternative} Let $p \in (1, + \infty]$ and the following assumptions be valid: \begin{enumerate} \item{$\zeta$ is locally Lipschitz continuous, $\theta$ and $g_j$, $j \in J$, are continuous, differentiable in $x$, and the functions $\nabla_x \theta$ and $\nabla_x g_j$, $j \in J$, are continuous; }
\item{the set $X$ is compact in $(C[0, T])^d$, and there exists a feasible point of problem \eqref{VariableEndpointProblem}; }
\item{there exist $a > 0$ and $\eta > 0$ such that for any $x \in A \setminus \Omega$ one can find a sequence of trajectories $\{ x_n \} \subseteq A$ converging to $x$ in the space $X$ such that
$|x_n(T) - x(T)| \le \eta \| x_n - x \|_p$ for all $n \in \mathbb{N}$, the sequence
$\{ (x_n - x) / \| x_n - x \|_p \}$ converges to some $h \in L^d_p(0, T)$, and \begin{equation} \label{StateConstr_Decay_Finite_P}
\int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt
\le - a \varphi(x)^{p - 1} \end{equation} in the case $1 < p < + \infty$, while \begin{equation} \label{StateConstr_Decay_Weakened_P_Infty}
\langle \nabla_x g_j(x(t), t), h(t) \rangle \le - a \quad
\forall t \in [0, T], \: j \in J \colon \varphi(x) = g_j(x(t), t) \end{equation} in the case $p = + \infty$. \label{Assumpt_StateConstr_Decay_Finite_p} } \end{enumerate} Then the penalty function $\Phi_{\lambda}$ for problem \eqref{VariableEndpointProblem} is completely exact on $A$. \end{theorem}
\begin{proof} The functional $\mathcal{I}$ is obviously continuous with respect to the uniform metric, which with the use of the fact that $X$ is compact in $(C[0, T])^d$ implies that the penalty function $\Phi_{\lambda}$ is bounded below on $X$ for any $\lambda \ge 0$. Moreover, the set $X$ is bounded in $(C[0, T])^d$. Hence by applying the mean value theorem and the fact that the function $\zeta$ is locally Lipschitz continuous one obtains that there exists $L_{\zeta} > 0$ such that \begin{align*}
|\mathcal{I}(x) - \mathcal{I}(y)| &\le \Big| \int_0^T \theta(x(t), t) \, dt - \int_0^T \theta(y(t), t) \, dt \Big|
+ |\zeta(x(T)) - \zeta(y(T))| \\
&\le \int_0^T \sup_{\alpha \in [0, 1]} \big|\nabla_x \theta(x(t) + \alpha (y(t) - x(t)), t)\big| |x(t) - y(t)| \, dt
+ L_{\zeta} |x(T) - y(T)| \\
&\le \max\{ T^{1 / p'} K, L_{\zeta} \} d_X(x, y) \end{align*}
for all $x, y \in X$, where $K = \max\{ |\nabla_x \theta(z, t)| \colon |z| \le R, \: t \in [0, T] \}$ and
$R > 0$ is such that $\| x \|_{\infty} \le R$ for all $x \in X$. Thus, the functional $\mathcal{I}$ is Lipschitz continuous on $X$.
Let a sequence $\{ (x_n, u_n) \}$ of feasible points of problem \eqref{VariableEndpointProblem} be such that $\mathcal{I}(x_n)$ converges to the optimal value $\mathcal{I}^*$ of this problem (recall that we assume that at least one feasible point exists). Since the set $X$ is compact in $(C[0, T])^d$, one can extract a subsequence $\{ x_{n_k} \}$ uniformly converging to some $x^* \in X$. From the uniform convergence, the continuity of the functions $g_j$, and the closedness of the set $S_T$ it follows that $x^*(T) \in S_T$ and $g_j(x^*(\cdot), \cdot) \le 0$ for all $j \in J$. Furthermore, by the definition of $X$ there exists $u^* \in U$ such that $x^*(t) = x_0 + \int_0^t f(x^*(\tau), u^*(\tau), \tau) \, d \tau$ for all $t \in [0, T]$, which implies that $(x^*, u^*)$ is a feasible point of problem \eqref{VariableEndpointProblem}. Taking into account the fact that the functional $\mathcal{I}$ is continuous with respect to the uniform metric one obtains that $\mathcal{I}(x^*) = \mathcal{I}^*$. Thus, $(x^*, u^*)$ is a globally optimal solution of problem \eqref{VariableEndpointProblem}, i.e. this problem has a globally optimal solution.
Let us check that the penalty term $\varphi$ is continuous on $X$. Indeed, arguing by reductio ad absurdum, suppose that $\varphi$ is not continuous at some point $x \in X$. Then there exist $\varepsilon > 0$ and a sequence
$\{ x_n \} \subset X$ converging to $x$ in the space $X$ such that $|\varphi(x_n) - \varphi(x)| \ge \varepsilon$ for all $n \in \mathbb{N}$. By applying the compactness of $X$ one obtains that there exists a subsequence $\{ x_{n_k} \}$ uniformly converging to some $\overline{x} \in X$. Clearly, $\{ x_{n_k} \}$ also converges to $\overline{x}$ in the space $X$, which implies that $\overline{x} = x$. Utilising the uniform convergence of $\{ x_{n_k} \}$ to $x$ and the continuity of the functions $g_j$ one can easily prove that $\varphi(x_{n_k}) \to \varphi(x)$ as $k \to \infty$ (see~Corollary~\ref{Corollary_StateConstrPenTerm_Contin}), which contradicts our assumption. Therefore, the penalty term $\varphi$ is continuous on $X$.
Thus, by Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} it remains to check that there exists $a > 0$ such that $\varphi^{\downarrow}_A(x) \le - a$ for all $x \in A \setminus \Omega$. Our aim is to show that this inequality is implied by assumption~\ref{Assumpt_StateConstr_Decay_Finite_p}.
\textbf{The case $p < + \infty$.} Fix any $x \in A \setminus \Omega$, and let $\{ x_n \} \subset A$ and $h$ be from assumption~\ref{Assumpt_StateConstr_Decay_Finite_p}. Define $\alpha_n = \| x_n - x \|_p$ and
$h_n = (x_n - x) / \| x_n - x \|_p$. Then $x_n = x + \alpha_n h_n$. Let us verify that \begin{equation} \label{PenTerm_SC_Hadamard_DD}
\lim_{n \to \infty} \frac{\varphi(x + \alpha_n h_n) - \varphi(x)}{\alpha_n}
= \frac{1}{\varphi(x)^{p - 1}}
\int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt. \end{equation}
Then by applying \eqref{StateConstr_Decay_Finite_P} and the inequality $|x_n(T) - x(T)| \le \eta \| x_n - x \|_p$ one gets that $$
\varphi^{\downarrow}_A(x) \le \liminf_{n \to \infty} \frac{\varphi(x_n) - \varphi(x)}{d_X(x_n, x)}
= \liminf_{n \to \infty} \frac{\alpha_n}{d_X(x_n, x)} \frac{\varphi(x + \alpha_n h_n) - \varphi(x)}{\alpha_n}
\le - \frac{a}{1 + \eta}, $$ which gives the required uniform upper bound on $\varphi^{\downarrow}_A(x)$.
Instead of proving \eqref{PenTerm_SC_Hadamard_DD}, let us check that \begin{equation} \label{PenTerm_SC_in_pth_Hadamard_DD}
\lim_{n \to \infty} \frac{\varphi(x + \alpha_n h_n)^p - \varphi(x)^p}{\alpha_n}
= \int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt. \end{equation} Then taking into account the facts that $\varphi(x) > 0$ (recall that $x \notin \Omega$), and the function $\omega(s) = s^{1/p}$ is differentiable at any point $s > 0$ one obtains that \eqref{PenTerm_SC_Hadamard_DD} holds true.
To prove \eqref{PenTerm_SC_in_pth_Hadamard_DD}, note at first that the multifunction $t \mapsto \partial_x \phi(x(t), t)$ is upper semicontinuous and thus measurable by \cite[Proposition~8.2.1]{AubinFrankowska}, which by \cite[Theorem~8.2.11]{AubinFrankowska} implies that the function $t \mapsto \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle$ is measurable. Arguing by reductio ad absurdum, suppose that \eqref{PenTerm_SC_in_pth_Hadamard_DD} does not hold true. Then there exist $\varepsilon > 0$ and a subsequence $\{ n_k \}$, $k \in \mathbb{N}$, such that \begin{equation} \label{PenTerm_SC_not_HDD}
\left| \frac{\varphi(x + \alpha_{n_k} h_{n_k})^p - \varphi(x)^p}{\alpha_{n_k}} -
\int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt \right|
\ge \varepsilon \end{equation} for all $k \in \mathbb{N}$. Since $h_n$ converges to $h$ in $L^d_p(0, T)$, one can find a subsequence of the sequence $\{ h_{n_k} \}$, which we denote once again by $\{ h_{n_k} \}$, that converges to $h$ almost everywhere. Hence by the Danskin-Demyanov theorem (see, e.g. \cite[Theorem~4.4.3]{IoffeTihomirov}) for a.e. $t \in (0, T)$ one has $\lim_{k \to \infty} \omega_k(t) = 0$, where $$
\omega_k(t) = \frac{\phi(x(t) + \alpha_{n_k} h_{n_k}(t), t)^p - \phi(x(t), t)^p}{\alpha_{n_k}} -
\phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle. $$ With the use of a nonsmooth version of the mean value theorem (see, e.g. \cite[Proposition~2]{Dolgopolik_MCD}) one obtains that for any $k \in \mathbb{N}$ and a.e. $t \in (0, T)$ there exist $\beta_k(t) \in (0, 1)$ and $v_k(t) \in \partial_x \phi(x(t) + \beta_k(t) (x_{n_k}(t) - x(t)), t)$ such that $$
\omega_k(t) = \Big( \phi(x(t) + \beta_k(t) (x_{n_k}(t) - x(t)), t) \Big)^{p - 1} \langle v_k(t), h_{n_k}(t) \rangle
- \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle. $$ Consequently, bearing in mind the facts that the set $X$ is compact in $(C[0, T])^d$ and the functions $g_j$ and
$\nabla_x g_j$ are continuous (see~\eqref{SubdiffOfMaxStateConstr}) one obtains that there exists $C > 0$ such that $|\omega_k(t)| \le C |h_{n_k}(t)| + C |h(t)|$ for all $k \in \mathbb{N}$ and a.e. $t \in (0, T)$.
The sequence $\{ h_{n_k} \}$ converges to $h$ in $L^d_p(0, T)$, which by H\"{o}lder's inequality implies that it converges to $h$ in $L^d_1(0, T)$. By the ``only if'' part of Vitali's theorem characterising convergence in $L^p$-spaces (see, e.g. \cite[Theorem~III.6.15]{DunfordSchwartz}) and the absolute continuity of the Lebesgue integral for any $\varepsilon > 0$ one can find $\delta(\varepsilon) > 0$ such that for any Lebesgue measurable set $E \subset [0, T]$ with $\mu(E) < \delta(\varepsilon)$ (here $\mu$ is the Lebesgue measure) one has $\int_E |h_{n_k}| \, d \mu < \varepsilon / (2 C)$ and $\int_E |h| \, d \mu < \varepsilon / (2 C)$. Consequently, $\int_E |\omega_k| \, d \mu < \varepsilon$, provided $\mu(E) < \delta(\varepsilon)$. Hence bearing in mind the fact that
$\omega_k(t) \to 0$ for a.e. $t \in (0, T)$ and passing to the limit with the use of the ``if'' part of Vitali's theorem one obtains that $\lim_{k \to \infty} \int_0^T |\omega_k(t)| \, dt = 0$, which contradicts \eqref{PenTerm_SC_not_HDD}. Thus, \eqref{PenTerm_SC_in_pth_Hadamard_DD} holds true, and the proof of the case $p < + \infty$ is complete.
\textbf{The case $p = + \infty$.} The proof of this case coincides with the derivation of the inequality $\varphi^{\downarrow}_A(x, u) \le - a$ within the proof of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear}. \end{proof}
\begin{remark} {(i)~Let us note that one can define $$
\varphi(x) = \bigg( \sum_{j = 1}^l \int_0^T \max\{ g_j(x(t), t), 0 \}^p \, dt \bigg)^{1/p}
\quad \text{or} \quad
\varphi(x) = \sum_{j = 1}^l \bigg(\int_0^T \max\{ g_j(x(t), t), 0 \}^p \, dt \bigg)^{1/p},
\quad 1 < p < + \infty, $$ and easily obtain corresponding sufficient conditions for the complete exactness of the penalty function $\Phi_{\lambda}$, which are very similar, but not identical, in all three cases. }
\noindent{(ii)~It should be mentioned that the term $|x(T) - y(T)|$ was introduced into the definition of the metric
$d_X(x, y) = \| x - y \|_p + |x(T) - y(T)|$ in $X$ to ensure that the functional $\mathcal{I}$ is Lipschitz continuous on $A$. In the case of problems with the cost functional of the form $\mathcal{I}(x) = \int_0^T \theta(x(t), t) \, dt$
one can define $d_X(x, y) = \| x - y \|_p$ and drop the inequality $|x_n(T) - x(T)| \le \eta \| x_n - x \|_p$ from assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative}. Note that the closedness of the set $A = \{ x \in X \mid x(T) \in S_T \}$ in the case when $X$ is equipped with the metric
$d_X(x, y) = \| x - y \|_p$ can be easily proved under the assumption that $X$ is compact in $(C[0, T])^d$, since in this case the topologies on $X$ generated by the metrics $d_X(x, y) = \| x - y \|_p$ and
$d_X(x, y) = \| x - y \|_{\infty}$ coincide. } \end{remark}
At first glance, assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} might seem very similar to assumption~\ref{Assumpt_StateConstr_Decay} of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear} and inequality \eqref{StateConstr_Decay_Impossible}. In particular, arguing in the same way as in Example~\ref{Example_LTV_SlaterImpliesExactPen} one can check that in the case $p = + \infty$ inequality \eqref{StateConstr_Decay_Weakened_P_Infty} is satisfied, provided the system is linear, the state constraints are convex, and Slater's condition holds true. However, there is one important difference. In assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} one does not need to care about the control inputs corresponding to the sequence $\{ x_n \}$ or about the derivatives $\dot{x}_n$, which makes this assumption significantly less restrictive than assumption~\ref{Assumpt_StateConstr_Decay} of Theorem~\ref{Theorem_StateConstrainedProblem_Nonlinear}.
\begin{remark} \label{Remark_ShiftTowardsFeasibleRegion} Let us point out a particular case in which assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} can be reformulated in a more convenient form. Suppose that $p < + \infty$, $l = 1$ (i.e. there is only one state constraint), and there exist $a_1, a_2 > 0$ such that
$a_1 \le |\nabla_x g_1(x, t)| \le a_2$ for all $t \in [0, T]$ and $x \in \mathbb{R}^d$ satisfying the inequality $g_1(x, t) > 0$. In this case assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} is satisfied, if there exists $\eta > 0$ such that for any $x \in A \setminus \Omega$ one can find a sequence of trajectories $\{ x_n \} \subset A$ converging to $x$
such that $|x_n(T) - x(T)| \le \eta \| x_n - x \|_p$ for all $n \in \mathbb{N}$, and the sequence
$\{ (x_n - x) / \| x_n - x \|_p \}$ converges to $h = y / \| y \|_p$ with $y(t) = - \phi(x(t), t) \nabla_x g_1(x(t), t)$ for all $t \in [0, T]$. Indeed, by applying the inequalities
$a_1 \le |\nabla_x g_1(x, t)| \le a_2$ and the fact that $\varphi(x) = \| \phi(x(\cdot), \cdot) \|_p$ one obtains $$
\int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt
= - \frac{\int_0^T \phi(x(t), t)^p | \nabla_x g_1(x(t), t) |^2 \, dt}{\left( \int_0^T \phi(x(t), t)^p
| \nabla_x g_1(x(t), t) |^p \, dt \right)^{1/p}}
\le - \frac{a_1^2 \varphi(x)^p }{a_2 \varphi(x)} = - \frac{a_1^2}{a_2} \varphi(x)^{p - 1}, $$ i.e. inequality \eqref{StateConstr_Decay_Finite_P} holds true. This assumption, in essence, means that for any trajectory $x$ violating the state constraint $g_1(x(t), t) \le 0$ one has to be able to find a sequence of control inputs that shift the trajectory $x$ along the ray $x_{\alpha}(t) = x(t) - \alpha \phi(x(t), t) \nabla_x g_1(x(t), t)$, $\alpha \ge 0$ (recall that $\phi(x(t), t) = \max\{ g_1(x(t), t), 0 \}$, i.e. the trajectory is shifted only at those points where the state constraint is violated). It is easily seen that for any $t \in [0, T]$ satisfying the inequality $g_1(x(t), t) > 0$ and for any sufficiently small $\alpha > 0$ one has $g_1(x_{\alpha}(t), t) < g_1(x(t), t)$, i.e. $x$ is shifted towards the feasible region. Thus, one can say that assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} is an assumption on the controllability of the system $\dot{x} = f(x, u, t)$ with respect to the state constraints. \end{remark}
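The computation carried out in this remark can be reproduced numerically. Below is a minimal Python sketch (the trajectory and the constraint in it are hypothetical) that forms the direction $h = y / \| y \|_p$ with $y(t) = - \phi(x(t), t) \nabla_x g_1(x(t), t)$ and evaluates the left-hand side of inequality \eqref{StateConstr_Decay_Finite_P}; for the chosen constraint one has $a_1 = a_2 = 1$, so that the left-hand side equals $- \varphi(x)^{p - 1}$ up to discretisation error:
\begin{verbatim}
import numpy as np

T, N, p = 1.0, 2000, 2
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

x = np.stack([np.cos(t), 1.5 * np.sin(t)], axis=1)  # in R^2
g = x[:, 1] - 1.0                        # g_1(x, t) = x^2 - 1
grad = np.tile([0.0, 1.0], (N, 1))       # grad_x g_1 = (0, 1)

viol = np.maximum(g, 0.0)                # phi(x(t), t)
y = -viol[:, None] * grad
h = y / ((np.sum(np.abs(y) ** p) * dt) ** (1.0 / p))

varphi = (np.sum(viol ** p) * dt) ** (1.0 / p)
lhs = np.sum(viol ** (p - 1) * np.sum(grad * h, axis=1)) * dt
print(lhs, "<=", -varphi ** (p - 1))     # equality here
\end{verbatim}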
It should be noted that Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} is mainly of theoretical interest, since it does not seem possible to verify assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} for any particular classes of optimal control problems appearing in practice. Nevertheless, let us give a simple and illuminating example of a problem in which this assumption is satisfied.
\begin{example} \label{Example_StateInequalConstr_Exact}
Let $d = 2$ and $m = 1$. Define $U = \{ u \in L^{\infty}(0, T) \mid \| u \|_{\infty} \le 1 \}$, and consider the following variable-endpoint optimal control problem with the state inequality constraint: \begin{equation} \label{Ex_StateInequalConstr_Exact} \begin{split}
&\min \: \mathcal{I}(x) = \int_0^T \theta(x(t), t) dt + \zeta(x(T)) \\
&\text{s.t.} \quad \begin{cases} \dot{x}^1 = 1 \\ \dot{x}^2 = u \end{cases} \quad
x(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad x(T) \in S_T, \quad u \in U, \quad g(x(t)) \le 0, \end{split} \end{equation} where $g(x^1, x^2) = x^2$, the functions $\theta$ and $\zeta$ satisfy the assumptions of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative}, and $S_T = \{ T \} \times [ - \beta, 0 ]$ for some $\beta \ge 0$. Then $(x, u)$ with $x(t) \equiv (t, 0)^T$ and $u(t) \equiv 0$ is a feasible point of this problem. Furthermore, by Proposition~\ref{Prop_CompleteMetricSpace} the space $X$ of trajectories of the system under consideration is compact in $(C[0, T])^d$. Thus, it remains to check that assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} holds true. We will verify this assumption with the use of the idea discussed in Remark~\ref{Remark_ShiftTowardsFeasibleRegion}.
Note that $g(x(0)) = 0$, i.e. Slater's condition is not satisfied. Fix any $x \in A \setminus \Omega$, and let $x$ correspond to a control input $u \in U$. For any $n \in \mathbb{N}$ define $$
u_n(t) = \begin{cases}
u(t), & \text{if } x^2(t) \le 0, \\
\left(1 - \frac{1}{n} \right) u(t), & \text{if } x^2(t) > 0,
\end{cases}
\qquad
x_n(t) = \begin{pmatrix} t \\ x^2(t) - \frac{1}{n} \max\{ x^2(t), 0 \} \end{pmatrix}. $$ Observe that $x_n$ is a trajectory of the system corresponding to the control input $u_n$, and for any $n \in \mathbb{N}$ one has $u_n \in U$, $x_n(0) = x(0) = (0, 0)^T$, $x(T) \in S_T$ by the definition of $A$, and
$x_n(T) \in S_T$, since by the definition of $S_T$ one has $x^2(T) \le 0$, which implies that $x_n(T) = x(T)$. Hence, in particular, $x_n \in A$ and $|x_n(T) - x(T)| = 0 \le \| x_n - x \|_p$ for all $n \in \mathbb{N}$. The sequence
$\{ x_n \}$ obviously converges to $x$ in $X$. Furthermore, note that $(x_n - x) / \| x_n - x \|_p = h$ with $h(\cdot) = (0, - \max\{ x^2(\cdot), 0 \} / \varphi(x))^T$ for all $n$, which obviously implies that the sequence
$\{ (x_n - x) / \| x_n - x \|_p \}$ converges to $h$, and $$
\int_0^T \phi(x(t), t)^{p - 1} \max_{v \in \partial_x \phi(x(t), t)} \langle v, h(t) \rangle \, dt
= - \frac{1}{\varphi(x)} \int_0^T \max\{ x^2(t), 0 \}^p \, dt = - \varphi(x)^{p - 1}, $$ i.e. assumption~\ref{Assumpt_StateConstr_Decay_Finite_p} of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} is satisfied with $a = 1$ and any $\eta > 0$. Thus, by Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} one can conclude that for any $1 < p < + \infty$ there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the penalised problem \begin{align*}
&\min \: \Phi_{\lambda}(x) = \int_0^T \theta(x(t), t) dt
+ \lambda \bigg( \int_0^T \max\{ x^2(t), 0 \}^p \, dt \bigg)^{1 / p} + \zeta(x(T)) \\
&\text{s.t.} \quad \begin{cases} \dot{x}^1 = 1 \\ \dot{x}^2 = u \end{cases}
x(0) = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \: x(T) \in S_T, \: u \in U \end{align*}
is equivalent to problem \eqref{Ex_StateInequalConstr_Exact} in the sense that these problems have the same optimal value, the same globally optimal solutions, as well as the same locally optimal solutions and inf-stationary points with respect to the pseudometric $d((x_1, u_1), (x_2, u_2)) = \| x_1 - x_2 \|_p + |x_1(T) - x_2(T)|$ in $W^2_{1, \infty}(0, T) \times L^{\infty}(0, T)$. \end{example}
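The construction used in this example is also easy to verify numerically. The following Python sketch (with hypothetical sample data) builds a shifted trajectory $x_n$, checks that $(x_n - x) / \| x_n - x \|_p$ coincides with $h$ independently of $n$, and evaluates the decay integral:
\begin{verbatim}
import numpy as np

T, N, p = 1.0, 4000, 2
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

x2 = 0.3 * np.sin(2.0 * np.pi * t)   # x^2(0) = x^2(T) = 0
viol = np.maximum(x2, 0.0)           # phi(x(t), t) = max{x^2, 0}
varphi = (np.sum(viol ** p) * dt) ** (1.0 / p)

n = 10
x2_n = x2 - viol / n                 # shifted trajectory
norm = (np.sum(np.abs(x2_n - x2) ** p) * dt) ** (1.0 / p)
h2 = (x2_n - x2) / norm

assert np.allclose(h2, -viol / varphi)   # h does not depend on n
lhs = np.sum(viol ** (p - 1) * h2) * dt
print(lhs, "vs", -varphi ** (p - 1))     # a = 1 in the example
\end{verbatim}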
Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} can be easily extended to the case of problems with state equality constraints. Namely, suppose that there is a single state equality constraint: $g(x(t), t) = 0$ for all
$t \in [0, T]$. Then one can define $\varphi(x) = \| g(x(\cdot), \cdot)\|_p$ for $1 < p < + \infty$. Arguing in a similar way to the proof of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative} one can verify that this theorem remains valid for problems with one state equality constraint, if one replaces inequality \eqref{StateConstr_Decay_Finite_P} with the following one: \begin{equation} \label{StateEqualConstr_Decay_Finite_P}
\int_0^T |g(x(t), t)|^{p - 1} \sign(g(x(t), t)) \langle \nabla_x g(x(t), t), h(t) \rangle \, dt
\le - a \varphi(x)^{p - 1}. \end{equation} As in Remark~\ref{Remark_ShiftTowardsFeasibleRegion}, one can verify that this inequality is satisfied for
$h = y / \| y \|_p$ with $y(t) = - g(x(t), t) \nabla_x g(x(t), t)$, provided $0 < a_1 \le | \nabla_x g(x, t) | \le a_2$ for all $x$ and $t$. Let us utilise this result to demonstrate that exact penalisation of state equality constraints is possible, if the cost functional $\mathcal{I}$ does not depend on the control inputs explicitly (cf.~Example~\ref{CounterExample_StateEqConstr} with which we started our analysis of state constrained problems).
\begin{example}
Let $d = 2$ and $m = 2$. Define $U = \{ u = (u^1, u^2)^T \in L^2_{\infty}(0, T) \mid \| u \|_{\infty} \le 1 \}$, and consider the following variable-endpoint optimal control problem with state equality constraint: \begin{equation} \label{Ex_StateEqualConstr_Exact} \begin{split}
&\min \: \mathcal{I}(x) = \int_0^T \theta(x(t), t) dt + \zeta(x(T)) \\
&\text{s.t.} \quad
\begin{cases}
\dot{x}^1 = u^1 \\
\dot{x}^2 = u^2
\end{cases}
x(0) = 0, \quad x(T) \in S_T, \quad u \in U, \quad g(x(t)) = x^1(t) + x^2(t) = 0 \quad \forall t \in [0, T]. \end{split} \end{equation} Here $S_T$ is a closed subset of the set $\{ x \in \mathbb{R}^2 \mid x^1 + x^2 = 0 \}$ such that $0 \in S_T$, while $\theta$ and $\zeta$ satisfy the assumptions of Theorem~\ref{Theorem_StateConstr_Nonlinear_Alternative}. Note that $(x, u)$ with $x(t) \equiv 0$ and $u(t) \equiv 0$ is a feasible point of problem \eqref{Ex_StateEqualConstr_Exact}. Furthermore, by Proposition~\ref{Prop_CompleteMetricSpace} the space $X$ of trajectories of the system under consideration is compact in $(C[0, T])^d$. Thus, as one can easily verify, there exists a globally optimal solution of \eqref{Ex_StateEqualConstr_Exact}.
Let us check that inequality \eqref{StateEqualConstr_Decay_Finite_P} holds true for any $x \in A \setminus \Omega$, i.e. for any trajectory violating the state equality constraint. Indeed, fix any such $x$, and let $x$ correspond to a control input $u \in U$. For any $n \in \mathbb{N}$ define $$
u_n = u - \frac{g(u)}{n} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad
x_n = x - \frac{g(x)}{n} \begin{pmatrix} 1 \\ 1 \end{pmatrix}. $$ Clearly, $x_n$ is a trajectory of the system corresponding to $u_n$, and for any $n \in \mathbb{N}$ one has $x_n(0) = 0$ and $x_n(T) = x(T) \in S_T$, since by our assumptions $x \in A = \{ x \in X \mid x(T) \in S_T \}$ and $g(\xi) = 0$ for any $\xi \in S_T$. Note also that $u_n \in U$ for any $n \in \mathbb{N}$ due to the facts that $$
|u_n^1(t)| = \left| u^1(t) - \frac{u^1(t) + u^2(t)}{n} \right|
\le \frac{n - 1}{n} |u^1(t)| + \frac{1}{n} |u^2(t)| \le \frac{n - 1}{n} + \frac{1}{n} = 1
\quad \text{for a.e. } t \in (0, T), $$
and the same inequality holds true for $u_n^2$. Thus, $x_n \in A$ and $|x_n(T) - x(T)| = 0 \le \| x_n - x \|_p$
for all $n$. Moreover, for any $n \in \mathbb{N}$ one has $(x_n - x) / \| x_n - x \|_p = h$ with $h = ( - g(x) / (\sqrt{2} \varphi(x)), - g(x) / (\sqrt{2} \varphi(x)))^T$, which obviously implies that the sequence
$\{ (x_n - x) / \| x_n - x \|_p \}$ converges to $h$, and $$
\int_0^T |g(x(t))|^{p - 1} \sign(g(x(t))) \langle \nabla_x g(x(t)), h(t) \rangle \, dt
= - \frac{2}{\sqrt{2} \varphi(x)} \int_0^T |g(x(t))|^p \, dt = - \sqrt{2} \varphi(x)^{p - 1}, $$ i.e. \eqref{StateEqualConstr_Decay_Finite_P} is satisfied with $a = \sqrt{2}$. Thus, one can conclude that for any $1 < p < + \infty$ the penalised problem \begin{align*}
&\min \: \Phi_{\lambda}(x) = \int_0^T \theta(x(t), t) dt
+ \lambda \bigg( \int_0^T |x^1(t) + x^2(t)|^p \, dt \bigg)^{1 / p} + \zeta(x(T)) \\
&\text{s.t.} \quad
\begin{cases}
\dot{x}^1 = u^1 \\
\dot{x}^2 = u^2
\end{cases}
x(0) = 0, \quad x(T) \in S_T, \quad u \in U \end{align*} is equivalent to problem \eqref{Ex_StateEqualConstr_Exact} for any sufficiently large $\lambda$. \end{example}
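As in the previous example, the constant $a = \sqrt{2}$ is easy to confirm numerically. Below is a minimal Python sketch with hypothetical data; the trajectory is chosen so that $x(0) = 0$, $g(x(T)) = 0$, and $\| \dot{x}^i \|_{\infty} \le 1$:
\begin{verbatim}
import numpy as np

T, N, p = 1.0, 4000, 2
t = np.linspace(0.0, T, N)
dt = t[1] - t[0]

x = np.stack([np.sin(np.pi * t) / np.pi,
              0.2 * np.sin(np.pi * t) / np.pi], axis=1)
g = x[:, 0] + x[:, 1]                    # violates g(x(t)) = 0
varphi = (np.sum(np.abs(g) ** p) * dt) ** (1.0 / p)

# h = -(g(x) / (sqrt(2) varphi)) (1, 1)^T
h = -g[:, None] * np.ones((1, 2)) / (np.sqrt(2.0) * varphi)

lhs = np.sum(np.abs(g) ** (p - 1) * np.sign(g)
             * (h[:, 0] + h[:, 1])) * dt
print(lhs, "vs", -np.sqrt(2.0) * varphi ** (p - 1))
\end{verbatim}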
\section{Conclusions}
In the second paper of our study we analysed the exactness of penalty functions for optimal control problems with terminal and pointwise state constraints. We proved that penalty functions for fixed-endpoint optimal control problems for linear time-varying systems and linear evolution equations are completely exact, if the terminal state belongs to the relative interior of the reachable set. In the nonlinear case, the local exactness of the penalty function can be ensured under the assumption that the linearised system is completely controllable, while the complete exactness of the penalty function can be achieved under certain assumptions on the reachable set and the controllability of the system, which require further investigation.
We also proved that penalty functions for variable-endpoint optimal control problems for linear time-varying systems with convex terminal constraints are completely exact, if Slater's condition holds true. In the case of nonlinear variable-endpoint problems, the local exactness of a penalty function was proved under the assumptions that the linearised system is completely controllable, and the well-known Mangasarian-Fromovitz constraint qualification (MFCQ) holds true for terminal constraints.
In the case of problems with pointwise state inequality constraints, we showed that penalty functions for such problems for linear time-varying systems and linear evolution equations with convex state constraints are completely exact, if the $L^{\infty}$ penalty term is used, and Slater's condition holds true. In the nonlinear case we proved the local exactness of the $L^{\infty}$ penalty function under the assumption that a suitable constraint qualification is satisfied, which resembles MFCQ. We also proved that the exact $L^p$ penalisation of pointwise state constraints with finite $p$ is possible for convex problems, if Lagrange multipliers corresponding to state constraints belong to $L^{p'}(0, T)$, and for nonlinear problems, if the cost functional does not depend on the control inputs explicitly and some additional assumptions are satisfied.
A reason that the exact $L^p$ penalisation of state constraints with finite $p$ requires more restrictive assumption is indirectly connected to the Pontryagin maximum principle. Indeed, if the penalty function with $L^p$ penalty term is locally exact at a locally optimal solution $(x^*, u^*)$, then by definition for any sufficiently large $\lambda \ge 0$ the pair $(x^*, u^*)$ is a locally optimal solution of the penalised problem without state constraints: \begin{align*}
{}&\min \: \Phi_{\lambda}(x) = \int_0^T \theta(x(t), u(t), t) \, dt +
\lambda \bigg( \int_0^T \max\{ g_1(x(t), t), \ldots, g_l(x(t), t), 0 \}^p \, dt \bigg)^{1/p} \\
{}&\text{subject to } \dot{x}(t) = f(x(t), u(t), t), \quad t \in [0, T], \quad
x(0) = x_0, \quad x(T) = x_T, \quad u \in U. \end{align*} It is possible to derive optimality conditions for this problem in the form of the Pontryagin maximum principle for the original problem, in which Lagrange multipliers corresponding to state constraints necessarily belong to $L^{p'}[0, T]$, if $p < + \infty$. Therefore, for the exactness of the $L^p$ penalty function for state constraints with finite $p$ it is necessary that there exist Lagrange multipliers corresponding to state constraints that belong to $L^{p'}[0, T]$. If no such multipliers exist, then the exact $L^p$-penalisation with finite $p$ is impossible.
Although we obtained a number of results on exact penalty functions for optimal control problems with terminal and pointwise state constraints, further research in this area is needed. In particular, it is interesting to find verifiable sufficient conditions under which the assumptions of Theorems~\ref{Theorem_FixedEndPointProblem_NonLinear}, \ref{Theorem_StateConstrainedProblem_Nonlinear}, and \ref{Theorem_StateConstr_Nonlinear_Alternative} on the complete exactness of corresponding penalty functions hold true in the nonlinear case. Moreover, the main results of our study can be easily extended to nonsmooth optimal control problems. In particular, one can suppose that the integrand $\theta$ is only locally Lipschitz continuous in $x$ and $u$, and impose the same growth conditions on the Clarke subdifferential (or some other suitable subdifferential), as we did on the derivatives of these functions. Also, it seems worthwhile to analyse connections between necessary/sufficient optimality conditions and the local exactness of penalty functions (cf.~the papers of Xing et al. \cite{Xing89,Xing94}, and Sections~4.6.2 and 4.7.2 in monograph~\cite{Polak_book}).
It should be noted that the general results on exact penalty functions that we utilised throughout our study are based on completely independent assumptions on the cost functional and constraints (see~Theorems~\ref{Theorem_CompleteExactness} and \ref{THEOREM_COMPLETEEXACTNESS_GLOBAL}). This approach allowed us to consider counterexamples in which the cost functionals were unrealistic from a practical point of view (cf.~Example~\ref{CounterExample_StateEqConstr}). Therefore, it seems profitable to obtain new general results on the exactness of penalty functions with the use of assumptions that are based on the interplay between the cost functional and constraints (cf.~such conditions for Huyer and Neumaier's penalty function in the finite dimensional case in Reference~\cite{WangMaZhou}).
Finally, for obvious reasons in this two-part study we restricted our consideration to several ``classical'' problems. Our goal was not to apply the theory of exact penalty functions to as many optimal control problems as possible, but to demonstrate the main tools, as well as merits and limitations, of this theory on several standard problems, in the hope that it will help the interested reader to apply the exact penalty function method to the optimal control problem at hand.
\section*{Acknowledgments}
The author wishes to express his thanks to the coauthor of the first part of this study A.V. Fominyh for many useful discussions on exact penalty functions and optimal control problems that led to the development of this paper.
\section*{Appendix A. Proof of Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL}}
Observe that under the assumptions of Theorem~\ref{THEOREM_COMPLETEEXACTNESS_GLOBAL} assumptions of Theorem~\ref{Theorem_CompleteExactness} are satisfied for any $c \in \mathbb{R}$ and $\delta > 0$. Therefore, by this theorem there exists $\lambda^* \ge 0$ such that for any $\lambda \ge \lambda^*$ the optimal values and globally optimal solutions of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide.
Let $L > 0$ be a Lipschitz constant of $\mathcal{I}$ on $A$, and fix any $x \in A \setminus \Omega$. By our assumption $\varphi^{\downarrow}_A(x) \le - a < 0$. By the definition of the rate of steepest descent there exists a sequence $\{ x_n \} \subset A$ converging to $x$ and such that $\varphi(x_n) - \varphi(x) < - a d(x_n, x) / 2$ for all $n \in \mathbb{N}$. Therefore $$
\Phi_{\lambda}(x_n) - \Phi_{\lambda}(x) = \mathcal{I}(x_n) - \mathcal{I}(x)
+ \lambda\big( \varphi(x_n) - \varphi(x) \big) \le \left( L - \lambda \frac{a}{2} \right) d(x_n, x) $$ for any $n \in \mathbb{N}$, which implies that $(\Phi_{\lambda})^{\downarrow}_A(x) < 0$ for all $\lambda > 2 L / a$ and $x \in A \setminus \Omega$. Thus, if $x^* \in A$ is an inf-stationary point/point of local minimum of $\Phi_{\lambda}$ on $A$ and $\lambda > 2 L / a$, then $x^* \in \Omega$. Here we used the fact that any point of local minimum of $\Phi_{\lambda}$ on $A$ is also an inf-stationary point of $\Phi_{\lambda}$ on $A$, since $(\Phi_{\lambda})^{\downarrow}_A(x) \ge 0$ is a necessary condition for local minimum.
Fix any $\lambda > 2 L / a$. Let $x^* \in A$ be a point of local minimum of the penalised problem \eqref{PenalizedProblem}. Then $x^* \in \Omega$. Hence bearing in mind the fact that by definition $\Phi_{\lambda}(x) = \mathcal{I}(x)$ for any $x \in \Omega$ one obtains that $x^*$ is a locally optimal solution of the problem $(\mathcal{P})$.
Let now $x^* \in \Omega$ be a locally optimal solution of $(\mathcal{P})$. Clearly, $x^* \in S_{\lambda}(c)$ for any $c > \Phi_{\lambda}(x^*)$. Hence by \cite[Lemma~1]{DolgopolikFominyh} there exists $r_1 > 0$ such that $\varphi(x) \ge a \dist(x, \Omega)$ for all $x \in B(x^*, r_1) \cap A$. Furthermore, by \cite[Lemma~2 and Remark~11]{DolgopolikFominyh} there exists $r_2 > 0$ such that $\mathcal{I}(x) - \mathcal{I}(x^*) \ge - L \dist(x, \Omega)$ for any $x \in B(x^*, r_2) \cap A$. Consequently, for any $x \in B(x^*, r) \cap A$ with $r = \min\{ r_1, r_2 \}$ one has $$
\Phi_{\lambda}(x) - \Phi_{\lambda}(x^*) = \mathcal{I}(x) - \mathcal{I}(x^*)
+ \lambda \big( \varphi(x) - \varphi(x^*) \big)
\ge \big( - L + \lambda a \big) \dist(x, \Omega) \ge 0, $$ i.e. $x^*$ is a locally optimal solution of the penalised problem \eqref{PenalizedProblem}. Thus, locally optimal solutions of the problems $(\mathcal{P})$ and \eqref{PenalizedProblem} coincide for any $\lambda > 2 L / a$.
Let now $x^* \in A$ be an inf-stationary point of $\Phi_{\lambda}$ on $A$. Then $x^* \in \Omega$. By definition $\Phi_{\lambda}(x) = \mathcal{I}(x)$ for any $x \in \Omega$, which yields $\mathcal{I}^{\downarrow}_{\Omega}(x^*) = (\Phi_{\lambda})^{\downarrow}_{\Omega}(x^*) \ge (\Phi_{\lambda})^{\downarrow}_A(x^*) \ge 0$, i.e. $x^*$ is an inf-stationary point of $\mathcal{I}$ on $\Omega$.
Let finally $x^* \in \Omega$ be an inf-stationary point of $\mathcal{I}$ on $\Omega$. By the definition of the rate of steepest descent there exists a sequence $\{ x_n \} \subset A$ converging to $x^*$ such that $$
(\Phi_{\lambda})^{\downarrow}_A(x^*)
= \lim_{n \to \infty} \frac{\Phi_{\lambda}(x_n) - \Phi_{\lambda}(x^*)}{d(x_n, x^*)}. $$ If there exists a subsequence $\{ x_{n_k} \} \subset \Omega$, then by the fact that $\varphi(x) = 0$ for all $x \in \Omega$ one gets that $$
(\Phi_{\lambda})^{\downarrow}_A(x^*)
= \lim_{k \to \infty} \frac{\Phi_{\lambda}(x_{n_k}) - \Phi_{\lambda}(x^*)}{d(x_{n_k}, x^*)}
= \lim_{k \to \infty} \frac{\mathcal{I}(x_{n_k}) - \mathcal{I}(x^*)}{d(x_{n_k}, x^*)}
\ge \mathcal{I}^{\downarrow}_{\Omega}(x^*) \ge 0. $$ Thus, one can suppose that $\{ x_n \} \subset A \setminus \Omega$.
Choose any $L' \in (L, \lambda a)$. By applying \cite[Lemmas~1 and 2]{DolgopolikFominyh} one obtains that \begin{align*}
\Phi_{\lambda}(x_n) - \Phi_{\lambda}(x^*)
&= \mathcal{I}(x_n) - \mathcal{I}(x^*) + \lambda \big( \varphi(x_n) - \varphi(x^*) \big) \\
&\ge - L' \dist(x_n, \Omega) - (L' - L)d(x_n, x^*) + \lambda a \dist(x_n, \Omega)
\ge - (L' - L)d(x_n, x^*) \end{align*} for any sufficiently large $n$. Dividing this inequality by $d(x_n, x^*)$, and passing to the limit as $n \to \infty$ one obtains that $(\Phi_{\lambda})^{\downarrow}_A(x^*) \ge - (L' - L)$, which implies that $(\Phi_{\lambda})^{\downarrow}_A(x^*) \ge 0$ due to the fact that $L' \in (L, \lambda a)$ was chosen arbitrarily. Consequently, $x^*$ is an inf-stationary point of $\Phi_{\lambda}$ on $A$. Thus, inf-stationary points of $\Phi_{\lambda}$ on $A$ coincide with inf-stationary points of $\mathcal{I}$ on $\Omega$ for any $\lambda > 2 L / a$, and the proof is complete.
\section*{Appendix B. Some Properties of Nemytskii Operators}
For the sake of completeness, in this appendix we give complete proofs of several well-known results on continuity and differentiability of Nemytskii operators (cf.~monograph~\cite{AppellZabrejko}). Firstly, we prove some auxiliary results related to state constraints of optimal control problems.
\begin{proposition} \label{Prop_ContNonlinearMap_in_C} Let $(Y, d)$ be a metric space, and $g \colon Y \times [0, T] \to \mathbb{R}$ be a continuous function. Then the operator $G(x)(\cdot) = g(x(\cdot), \cdot)$ continuously maps $C([0, T]; Y)$ to $C[0, T]$. \end{proposition}
\begin{proof}
Choose any $x \in C([0, T]; Y)$. Due to the continuity of $g$, for any $t \in [0, T]$ and $\varepsilon > 0$ there exists $\delta(t) > 0$ such that for all $y \in Y$ and $\tau \in [0, T]$ with $d(y, x(t)) + |t - \tau| < \delta(t)$ one has $|g(y, \tau) - g(x(t), t)| < \varepsilon / 2$. The set $K = \{ (x(t), t) \in Y \times \mathbb{R} \mid t \in [0, T] \}$ is compact as the image of the compact set $[0, T]$ under a continuous map. Therefore, there exist $N \in \mathbb{N}$ and $\{ t_1, \ldots, t_N \} \subset [0, T]$ such that $K \subset \cup_{k = 1}^N B( (x(t_k), t_k), \delta(t_k) / 2 )$. Define $\delta = \min_k \delta(t_k) / 2$.
Now, choose any $t \in [0, T]$ and $\overline{x} \in C([0, T]; Y)$ such that
$\| \overline{x} - x \|_{C([0, T]; Y)} < \delta$. By definition one has $d(\overline{x}(t), x(t)) < \delta$. Furthermore, there exists $k \in \{ 1, \ldots, N \}$ such that $(x(t), t) \in B( (x(t_k), t_k), \delta(t_k) / 2 )$, which due to the definition of $\delta$ implies that $(\overline{x}(t), t) \in B( (x(t_k), t_k), \delta(t_k) )$. Hence by the definition of $\delta(t_k)$ one has $$
\big| g(\overline{x}(t), t) - g(x(t), t) \big| \le
\big| g(\overline{x}(t), t) - g(x(t_k), t_k) \big| + \big| g(x(t_k), t_k) - g(x(t), t) \big|
< \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon, $$
which yields $\| g(\overline{x}(\cdot), \cdot) - g(x(\cdot), \cdot) \|_{\infty} < \varepsilon$ due to the fact that $t \in [0, T]$ is arbitrary. Thus, the operator $G$ continuously maps $C([0, T]; Y)$ to $C[0, T]$. \end{proof}
\begin{corollary} \label{Corollary_StateConstrPenTerm_Contin} Let $(Y, d)$ be a metric space, and $g_j \colon Y \times [0, T] \to \mathbb{R}$, $j \in J = \{ 1, \ldots, l \}$, be continuous functions. Then the function $\varphi \colon C([0, T]; Y) \to \mathbb{R}$, $\varphi(x) = \max_{t \in [0, T]} \max_{j \in J} g_j(x(t), t)$ is continuous. \end{corollary}
\begin{proof} Fix any $x \in C([0, T]; Y)$. By Proposition~\ref{Prop_ContNonlinearMap_in_C} for any $\varepsilon > 0$ there exists $\delta > 0$ such that for any $\overline{x} \in B(x, \delta)$ one has
$\| g_j(\overline{x}(\cdot), \cdot) - g_j(x(\cdot), \cdot) \|_{\infty} < \varepsilon$ for all $j \in J$. Consequently, for any such $\overline{x}$ one has $$
g_j(\overline{x}(t), t) \le g_j(x(t), t) + \varepsilon \le \varphi(x) + \varepsilon \quad
\forall t \in [0, T], j \in J. $$ Taking the supremum, at first, over all $j \in J$, and then over all $t \in [0, T]$ one obtains that $\varphi(\overline{x}) \le \varphi(x) + \varepsilon$. Arguing in the same way but swapping $\overline{x}$ with $x$ one obtains that $\varphi(x) \le \varphi(\overline{x}) + \varepsilon$. Therefore,
$|\varphi(x) - \varphi(\overline{x})| \le \varepsilon$, provided $\| x - \overline{x} \|_{C([0, T]; Y)} < \delta$, i.e. $\varphi$ is continuous. \end{proof}
\begin{theorem} \label{Theorem_DiffStateConstr} Let a function $g \colon \mathbb{R}^d \times [0, T] \to \mathbb{R}$, $g = g(x, t)$, be continuous, differentiable in $x$, and the function $\nabla_x g$ be continuous. Then for any $p \in [1, + \infty]$ the Nemytskii operator $G(x) = g(x(\cdot), \cdot)$ maps $W^d_{1, p}(0, T)$ to $C[0, T]$, is continuously Fr\'{e}chet differentiable on $W^d_{1, p}(0, T)$, and its Fr\'{e}chet derivative has the form $D G(x)[h] = \nabla_x g(x(\cdot), \cdot) h(\cdot)$ for all $x, h \in W^d_{1, p}(0, T)$. \end{theorem}
\begin{proof} Recall that we identify $W^d_{1, p}(0, T)$ with the space of all those absolutely continuous functions $x \colon [0, T] \to \mathbb{R}^d$ for which $\dot{x} \in L^d_p(0, T)$ (see, e.g. \cite{Leoni}). Hence bearing in mind the fact that the function $g$ is continuous one obtains that for any $x \in W^d_{1, p}(0, T)$ one has $g(x(\cdot), \cdot) \in C[0, T]$, i.e. the operator $G$ maps $W^d_{1, p}(0, T)$ to $C[0, T]$. Let us check that this operator is Fr\'{e}chet differentiable.
Fix any $x, h \in W^d_{1, p}(0, T)$. By the mean value theorem for any $t \in [0, T]$ one has \begin{equation} \label{StateConstr_MeanValue}
\Big| \frac{1}{\alpha} \big( g(x(t) + \alpha h(t), t) - g(x(t), t) \big)
- \langle \nabla_x g(x(t), t), h(t) \rangle \Big| \le
\sup_{\eta \in (0, \alpha)} | \nabla_x g(x(t) + \eta h(t), t) - \nabla_x g(x(t), t) | \| h \|_{\infty}. \end{equation} By Proposition~\ref{Prop_ContNonlinearMap_in_C} the function $x \mapsto \nabla_x g(x(\cdot), \cdot)$ continuously maps $C[0, T]$ to $(C[0, T])^d$, which by inequality \eqref{SobolevImbedding} implies that it continuously maps $W^d_{1, p}(0, T)$ to $(C[0, T])^d$. Consequently, the right-hand side of \eqref{StateConstr_MeanValue} converges to zero uniformly on $[0, T]$ as $\alpha \to +0$. Thus, one has $$
\lim_{\alpha \to + 0}
\left\| \frac{G(x + \alpha h) - G(x)}{\alpha} - \nabla_x g(x(\cdot), \cdot) h(\cdot) \right\|_{\infty} = 0, $$ i.e. the operator $G$ is G\^{a}teaux differentiable, and its G\^{a}teaux derivative has the form $D G(x)[h] = \nabla_x g(x(\cdot), \cdot) h(\cdot)$. Note that the map $D G(\cdot)$ is continuous, since the nonlinear operator $x \mapsto \nabla_x g(x(\cdot), \cdot)$ continuously maps $W^d_{1, p}(0, T)$ to $(C[0, T])^d$ by Proposition~\ref{Prop_ContNonlinearMap_in_C} and inequality \eqref{SobolevImbedding}. Hence, as is well known, the operator $G$ is continuously Fr\'{e}chet differentiable, and its Fr\'{e}chet derivative coincides with the G\^{a}teaux one. \end{proof}
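The derivative formula from Theorem~\ref{Theorem_DiffStateConstr} admits a straightforward finite-difference sanity check. The following Python sketch (the function $g$ and the data in it are hypothetical) compares the difference quotient with $\nabla_x g(x(\cdot), \cdot) h(\cdot)$ in the uniform norm:
\begin{verbatim}
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x = np.sin(t)                        # scalar trajectory (d = 1)
h = np.cos(3.0 * t)

g = lambda x, t: x ** 2 + t * x      # smooth g(x, t)
grad_g = lambda x, t: 2.0 * x + t    # grad_x g(x, t)

alpha = 1e-6
fd = (g(x + alpha * h, t) - g(x, t)) / alpha
exact = grad_g(x, t) * h
print("sup-norm error:", np.abs(fd - exact).max())
\end{verbatim}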
Let us also prove the differentiability of the Nemytskii operator $F(x, u) = \dot{x}(\cdot) - f(x(\cdot), u(\cdot), \cdot)$ associated with the nonlinear differential equation $\dot{x} = f(x, u, t)$.
\begin{theorem} \label{Theorem_DiffNemytskiiOperator} Let a function $f \colon \mathbb{R}^d \times \mathbb{R}^m \times [0, T] \to \mathbb{R}^d$, $f = f(x, u, t)$, be continuous, differentiable in $x$ and in $u$, and the functions $\nabla_x f$ and $\nabla_u f$ be continuous. Suppose also that $q \ge p \ge 1$, and either $q = + \infty$ or $f$ and $\nabla_x f$ satisfy the growth condition of order $(q / p, p)$, while $\nabla_u f$ satisfies the growth condition of order $(q / s, s)$ with $s = qp / (q - p)$ in the case $q > p$, and $\nabla_u f$ does not depend on $u$ in the case $q = p$. Then the nonlinear operator $F(x, u) = (\dot{x}(\cdot) - f(x(\cdot), u(\cdot), \cdot), x(T))$ maps $X = W^d_{1, p}(0, T) \times L^m_q(0, T)$ to $L^d_p(0, T) \times \mathbb{R}^d$, is continuously Fr\'{e}chet differentiable, and its Fr\'{e}chet derivative has the form $$
D F(x, u)[h, v] = \begin{pmatrix} \dot{h}(\cdot) - A(\cdot) h(\cdot) - B(\cdot) v(\cdot) \\ h(T) \end{pmatrix},
\quad A(t) = \nabla_x f(x(t), u(t), t), \quad B(t) = \nabla_u f(x(t), u(t), t) $$ for any $(x, u) \in X$. \end{theorem}
\begin{proof} Let us prove that the Nemytskii operator $F_0(x, u) = f(x(\cdot), u(\cdot), \cdot)$ maps $X$ to $L^d_p(0, T)$, is continuously Fr\'{e}chet differentiable, and its Fr\'{e}chet derivative has the form \begin{equation} \label{NemytskiiOperatorDerivative}
D F_0(x, u)[h, v] = A(\cdot) h(\cdot) + B(\cdot) v(\cdot),
\quad A(t) = \nabla_x f(x(t), u(t), t), \quad B(t) = \nabla_u f(x(t), u(t), t) \end{equation} for any $(x, u) \in X$ and $(h, v) \in X$. With the use of this result one can easily prove that the conclusion of the theorem holds true.
Fix any $(x, u) \in X$. By inequality \eqref{SobolevImbedding} there exists $R > 0$ such that $\| x \|_{\infty} \le R$. Then by the growth condition on the function $f$ there exist $C_R > 0$ and an a.e. nonnegative function $\omega_R \in L^p(0, T)$ such that $$
|f(x(t), u(t), t)|^p \le \big( C_R |u(t)|^{q/p} + \omega_R(t) \big)^p
\le 2^p \big( C_R^p |u(t)|^q + \omega_R(t)^p \big) $$ for a.e. $t \in (0, T)$. Observe that the right-hand side of this inequality belongs to $L^1(0, T)$. Therefore, $F_0(x, u) = f(x(\cdot), u(\cdot), \cdot) \in L^d_p(0, T)$, i.e. the operator $F_0$ maps $X$ to $L^d_p(0, T)$. Now we turn to the proof of the Fr\'{e}chet differentiability of this operator. Let us consider two cases.
\textbf{Case $q = + \infty$.} Fix any $(x, u) \in X$, $(h, v) \in X$ and $\alpha \in (0, 1]$. By the mean value theorem for a.e. $t \in (0, T)$ one has \begin{multline} \label{MeanValue_NemytskiiOperator_Infty}
\frac{1}{\alpha} \big| f(x(t) + \alpha h(t), u(t) + \alpha v(t), t) - f(x(t), u(t), t)
- \alpha \nabla_x f(x(t), u(t), t) h(t) - \alpha \nabla_u f(x(t), u(t), t) v(t) \big| \\
\le \sup_{\eta \in (0, \alpha)} \esssup_{t \in [0, T]}
\big| \nabla_x f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_x f(x(t), u(t), t) \big| |h(t)| \\
+ \sup_{\eta \in (0, \alpha)} \esssup_{t \in [0, T]}
\big| \nabla_u f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_u f(x(t), u(t), t) \big| |v(t)|. \end{multline} With the use of the facts that all functions $x$, $h$, $u$, and $v$ are essentially bounded on $[0, T]$, and the functions $\nabla_x f$ and $\nabla_u f$ are uniformly continuous on the compact set
$B(\mathbf{0}_d, \| x \|_{\infty} + \| h \|_{\infty}) \times B(\mathbf{0}_m, \| u \|_{\infty} + \| v \|_{\infty}) \times [0, T]$ (here $\mathbf{0}_d$ is the zero vector from $\mathbb{R}^d$) one can verify that the right-hand side of \eqref{MeanValue_NemytskiiOperator_Infty} converges to zero as $\alpha \to +0$. Observe also that $A(\cdot) = \nabla_x f(x(\cdot), u(\cdot), \cdot) \in L^{d \times d}_{\infty}(0, T)$ and $B(\cdot) = \nabla_u f(x(\cdot), u(\cdot), \cdot) \in L^{d \times m}_{\infty}(0, T)$ due to the continuity of $\nabla_x f$ and $\nabla_u f$ and the essential boundedness of $x$ and $u$. Hence, as is easy to check, the mapping $(h, v) \mapsto A(\cdot) h(\cdot) + B(\cdot) v(\cdot)$ is a bounded linear operator from $X$ to $L^d_{\infty}(0, T)$ (and, therefore, to $L^d_p(0, T)$). Thus, one has \begin{align*}
\lim_{\alpha \to 0}
&\left\| \frac{1}{\alpha} \big( F_0(x + \alpha h, u + \alpha v) - F_0(x, u) \big) - D F_0(x, u) [h, v] \right\|_p \\
&\le T^{1/p} \lim_{\alpha \to 0}
\left\| \frac{1}{\alpha} \big( F_0(x + \alpha h, u + \alpha v) - F_0(x, u) \big) - D F_0(x, u) [h, v]
\right\|_{\infty} = 0, \end{align*} where $D F_0(x, u)[h, v]$ is defined as in \eqref{NemytskiiOperatorDerivative} (here $1 / p = 0$, if $p = + \infty$). Consequently, the Nemytskii operator $F_0$ is G\^{a}teaux differentiable at every point $(x, u) \in X$, and its G\^{a}teaux derivative has the form \eqref{NemytskiiOperatorDerivative}.
Let us check that the G\^{a}teaux derivative $D F_0(\cdot)$ is continuous on $X$. Then, as is well-known, $F_0$ is continuously Fr\'{e}chet differentiable on $X$, and its Fr\'{e}chet derivative coincides with $D F_0(\cdot)$. Fix any $(x, u) \in X$ and $(x', u') \in X$. For any $(h, v) \in X$ one has \begin{align*}
\| D F_0(x, u)[h, v] - D F_0(x', u')[h, v] \|_p
&\le T^{1/p} \| D F_0(x, u)[h, v] - D F_0(x', u')[h, v] \|_{\infty} \\
&\le T^{1/p} \esssup_{t \in [0, T]}
\big| \nabla_x f(x(t), u(t), t) - \nabla_x f(x'(t), u'(t), t) \big| \| h \|_{\infty} \\
&+ T^{1/p} \esssup_{t \in [0, T]}
\big| \nabla_u f(x(t), u(t), t) - \nabla_u f(x'(t), u'(t), t) \big| \| v \|_{\infty}. \end{align*} Hence with the use of \eqref{SobolevImbedding} one obtains that there exists $C_p > 0$ (depending only on $p$ and $T$) such that \begin{align*}
\| D F_0(x, u) - D F_0(x', u') \| &\le T^{1/p} C_p \esssup_{t \in [0, T]}
\big| \nabla_x f(x(t), u(t), t) - \nabla_x f(x'(t), u'(t), t) \big| \\
&+ T^{1/p} \esssup_{t \in [0, T]} \big| \nabla_u f(x(t), u(t), t) - \nabla_u f(x'(t), u'(t), t) \big|. \end{align*}
Utilising this inequality and taking into account the fact that the functions $\nabla_x f$ and $\nabla_u f$ are continuous one can verify via a simple $\varepsilon-\delta$ argument that $\| D F_0(x, u) - D F_0(x', u') \| \to 0$ as $(x', u') \to (x, u)$ in $X$ (cf. the proof of Proposition~\ref{Prop_ContNonlinearMap_in_C}). Thus, the mapping $D F_0(\cdot)$ is continuous, and the proof of the case $q = + \infty$ is complete.
\textbf{Case $q < + \infty$}. Fix any $(x, u) \in X$, $(h, v) \in X$ and $\alpha \in (0, 1]$. By the mean value theorem \begin{multline} \label{MeanValue_NemytskiiOperator}
\frac{1}{\alpha} \big| f(x(t) + \alpha h(t), u(t) + \alpha v(t), t) - f(x(t), u(t), t)
- \alpha \nabla_x f(x(t), u(t), t) h(t) - \alpha \nabla_u f(x(t), u(t), t) v(t) \big|^p \\
\le 2^p \sup_{\eta \in (0, \alpha)}
\big| \nabla_x f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_x f(x(t), u(t), t) \big|^p |h(t)|^p \\
+ 2^p \sup_{\eta \in (0, \alpha)}
\big| \nabla_u f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_u f(x(t), u(t), t) \big|^p |v(t)|^p \end{multline} for a.e. $t \in (0, T)$. Our aim is to apply Lebesgue's dominated convergence theorem.
The right-hand side of \eqref{MeanValue_NemytskiiOperator} converges to zero as $\alpha \to 0$ for a.e. $t \in (0, T)$ due to the continuity of $\nabla_x f$ and $\nabla_u f$. By applying \eqref{SobolevImbedding}, and the facts that $\alpha \in (0, 1]$ and $\nabla_x f$ satisfies the growth condition of order $(q / p, p)$ one obtains that there exist $C_R > 0$ and an a.e. nonnegative function $\omega_R \in L^p(0, T)$ such that \begin{multline*}
\sup_{\eta \in (0, \alpha)}
\big| \nabla_x f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_x f(x(t), u(t), t) \big|^p |h(t)|^p \\
\le 2^p \sup_{\eta \in (0, \alpha)} \big| \nabla_x f(x(t) + \eta h(t), u(t) + \eta v(t), t) \big|^p
C_p \| h \|_{1, p}^p
+ 2^p \sup_{\eta \in (0, \alpha)} \big| \nabla_x f(x(t), u(t), t) \big|^p C_p \| h \|_{1, p}^p \\
\le 2^{2p} \Big( \big( C_R^p 2^q (|u(t)|^q + |v(t)|^q) + \omega_R(t)^p \big)
+ \big( C_R^p |u(t)|^q + \omega_R(t)^p \big) \Big) C_p \| h \|_{1, p}^p \end{multline*} for a.e. $t \in (0, T)$. Observe that the right-hand side of this inequality belongs to $L^1(0, T)$ and does not depend on $\alpha$, i.e. the first term in the right-hand side of \eqref{MeanValue_NemytskiiOperator} can be bounded above by a function from $L^1(0, T)$ that is independent of $\alpha$.
Let us now estimate the second term in the right-hand side of \eqref{MeanValue_NemytskiiOperator}. Let $q > p$. Bearing in mind the fact that $\nabla_u f$ satisfies the growth condition of order $(q/s, s)$ one obtains that there exist $C_R > 0$ and an a.e. nonnegative function $\omega_R \in L^s(0, T)$ such that \begin{multline*}
\sup_{\eta \in (0, \alpha)}
\big| \nabla_u f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_u f(x(t), u(t), t) \big|^p |v(t)|^p \\
\le 2^p \Big( \big| C_R 2^{q/s} (|u(t)|^{q/s} + |v(t)|^{q/s}) + \omega_R(t) \big|^p
+ \big| C_R |u(t)|^{q/s} + \omega_R(t) \big|^p \Big) |v(t)|^p \end{multline*} for a.e. $t \in (0, T)$. Let us check that the right-hand side of this inequality belongs to $L^1(0, T)$. Indeed, by applying H\"{o}lder's inequality of the form \begin{equation} \label{HolderInequality_p_to_s_q}
\Big(\int_0^T |y_1(t)|^p |y_2(t)|^p \, dt \Big)^{1/p} \le \| y_1 \|_s \| y_2 \|_q \end{equation} (here we used the fact that $(q/p)' = s/p$) one gets that \begin{multline*}
\Big( \int_0^T \Big| C_R 2^{q/s} (|u(t)|^{q/s} + |v(t)|^{q/s}) + \omega_R(t) \Big|^p |v(t)|^p \, dt \Big)^{1/p} \\
\le \Big\| C_R 2^{q/s} (|u(\cdot)|^{q/s} + |v(\cdot)|^{q/s}) + \omega_R(\cdot) \Big\|_s \| v \|_q
\le \Big( C_R 2^{q/s} \big( \| u \|_q^{q/s} + \| v \|_q^{q/s} \big) + \| \omega_R \|_s \Big) \| v \|_q < + \infty. \end{multline*} Thus, the last term in the right-hand side of inequality \eqref{MeanValue_NemytskiiOperator} can also be bounded above by a function from $L^1(0, T)$ that does not depend on $\alpha$.
Finally, recall that in the case $q = p$ the function $\nabla_u f$ does not depend on $u$, which implies that it satisfies the growth condition of order $(0, + \infty)$, i.e. for any $R > 0$ there exists $C_R > 0$ such that
$|\nabla_u f(x, u, t)| \le C_R$ for a.e. $t \in (0, T)$ and for all $(x, u) \in \mathbb{R}^d \times \mathbb{R}^m$ with
$|x| \le R$. Therefore, as is easy to check, in this case there exists $C > 0$ (that does not depend on $\alpha$) such that $$
\sup_{\eta \in (0, \alpha)}
\big| \nabla_u f(x(t) + \eta h(t), u(t) + \eta v(t), t) - \nabla_u f(x(t), u(t), t) \big|^p |v(t)|^p
\le C |v(t)|^q $$ for a.e. $t \in (0, T)$. The right-hand side of this inequality obviously belongs to $L^1(0, T)$.
Thus, the right-hand side of \eqref{MeanValue_NemytskiiOperator} can be bounded above by a function from $L^1(0, T)$ that does not depend on $\alpha$. Furthermore, from the growth conditions on $\nabla_x f$ and $\nabla_u f$ it follows that $A(\cdot) = \nabla_x f(x(\cdot), u(\cdot), \cdot) \in L^{d \times d}_p(0, T)$ and $B(\cdot) = \nabla_u f(x(\cdot), u(\cdot), \cdot) \in L^{d \times m}_s(0, T)$, which, as is easily seen, implies that the mapping $(h, v) \mapsto A(\cdot) h(\cdot) + B(\cdot) v(\cdot)$ is a bounded linear operator from $X$ to $L^d_p(0, T)$ (here $s = + \infty$ in the case $p = q$). Therefore, integrating \eqref{MeanValue_NemytskiiOperator} from $0$ to $T$ and passing to the limit as $\alpha \to 0$ with the use of Lebesgue's dominated convergence theorem one obtains that $$
\lim_{\alpha \to 0}
\left\| \frac{1}{\alpha} \big( F_0(x + \alpha h, u + \alpha v) - F_0(x, u) \big) - D F_0(x, u) [h, v] \right\|_p = 0, $$ where $D F_0(x, u)[h, v]$ is defined as in \eqref{NemytskiiOperatorDerivative}. Thus, the Nemytskii operator $F_0$ is G\^{a}teaux differentiable at every point $(x, u) \in X$, and its G\^{a}teaux derivative has the form \eqref{NemytskiiOperatorDerivative}. Let us check that this derivative is continuous. Then one can conclude that $F_0$ is continuously Fr\'{e}chet differentiable on $X$, and its Fr\'{e}chet derivative coincides with $D F_0(\cdot)$.
Indeed, choose any $(x, u) \in X$ and $(x', u') \in X$. With the use of \eqref{NemytskiiOperatorDerivative} and H\"{o}lder's inequality of the form \eqref{HolderInequality_p_to_s_q} one obtains \begin{align*}
\| D F_0(x, u)[h, v] - D F_0(x', u')[h, v] \|_p
&\le \| \nabla_x f(x(\cdot), u(\cdot), \cdot) - \nabla_x f(x'(\cdot), u'(\cdot), \cdot) \|_p \| h \|_{\infty} \\
&+ \| \nabla_u f(x(\cdot), u(\cdot), \cdot) - \nabla_u f(x'(\cdot), u'(\cdot), \cdot) \|_s \| v \|_q \end{align*} for any $(h, v) \in X$ (in the case $q = p$ we put $s = \infty$). Hence taking into account \eqref{SobolevImbedding} one gets $$
\| D F_0(x, u) - D F_0(x', u') \|
\le C_p \| \nabla_x f(x(\cdot), u(\cdot), \cdot) - \nabla_x f(x'(\cdot), u'(\cdot), \cdot) \|_p
+ \| \nabla_u f(x(\cdot), u(\cdot), \cdot) - \nabla_u f(x'(\cdot), u'(\cdot), \cdot) \|_s. $$ Therefore, the mapping $(x, u) \mapsto D F_0(x, u)$ is continuous in the operator norm, if for any sequence $\{ (x_n, u_n) \}$ in $X$ converging to $(x, u)$ the sequence $\{ \nabla_x f(x_n(\cdot), u_n(\cdot), \cdot) \}$ converges to $\nabla_x f(x(\cdot), u(\cdot), \cdot)$ in $L^{d \times d}_p(0, T)$, while the sequence $\{ \nabla_u f(x_n(\cdot), u_n(\cdot), \cdot) \}$ converges to $\nabla_u f(x(\cdot), u(\cdot), \cdot)$ in $L^{d \times m}_s(0, T)$.\footnote{Note that in the case $p = q < + \infty$ one must prove that the sequence $\{ \nabla_u f(x_n(\cdot), u_n(\cdot), \cdot) \}$ converges in $L^{d \times m}_{\infty}(0, T)$, while $\{ u_n \}$ converges only in $L^m_q(0, T)$ with $q < + \infty$. That is why in this case one must assume that $\nabla_u f$ does not depend on $u$, i.e. $f$ is affine in the control.}
Let us prove the convergence of the sequence $\{ \nabla_x f(x_n(\cdot), u_n(\cdot), \cdot) \}$. The convergence of the sequence $\{ \nabla_u f(x_n(\cdot), u_n(\cdot), \cdot) \}$ can be proved in the same way. Arguing by reductio ad absurdum, suppose that there exists a sequence $\{ (x_n, u_n) \} \subset X$ converging to $(x, u)$ such that the sequence $\{ \nabla_x f(x_n(\cdot), u_n(\cdot), \cdot) \}$ does not converge to $\nabla_x f(x(\cdot), u(\cdot), \cdot)$ in $L^{d \times d}_p(0, T)$. Then there exist $\varepsilon > 0$ and a subsequence $\{ (x_{n_k}, u_{n_k}) \}$ such that \begin{equation} \label{NonConvergenceInLp}
\big\| \nabla_x f(x_{n_k}(\cdot), u_{n_k}(\cdot), \cdot) - \nabla_x f(x(\cdot), u(\cdot), \cdot) \big\|_p
\ge \varepsilon
\quad \forall k \in \mathbb{N}. \end{equation} It should be noted that all functions $\{ \nabla_x f(x_n(\cdot), u_n(\cdot), \cdot) \}$ belong to $L^{d \times d}_p(0, T)$ due to the fact that $\nabla_x f$ satisfies the growth condition of order $(q/p, p)$.
By \eqref{SobolevImbedding} the sequence $\{ x_{n_k} \}$ converges to $x$ uniformly on $[0, T]$, which implies that
$\| x_{n_k} \|_{\infty} \le R$ for all $k \in \mathbb{N}$ and some $R > 0$. The sequence $\{ u_{n_k} \}$ converges to $u$ in $L^m_q(0, T)$. Hence, as is well-known, there exists a subsequence, which we denote again by $\{ u_{n_k} \}$, that converges to $u$ almost everywhere. Consequently, by the continuity of $\nabla_x f$ the subsequence $\{ \nabla_x f(x_{n_k}(t), u_{n_k}(t), t) \}$ converges to $\nabla_x f(x(t), u(t), t)$ for a.e. $t \in (0, T)$.
The sequence $\{ u_{n_k} \}$ converges to $u$ in $L^m_q(0, T)$. Therefore, by the ``only if'' part of Vitali's theorem characterising convergence in $L^p$ spaces (see, e.g. \cite[Theorem~III.6.15]{DunfordSchwartz}) for any $\varepsilon > 0$ there exists $\delta(\varepsilon) > 0$ such that for any Lebesgue measurable set $E \subset (0, T)$ with $\mu(E) < \delta(\varepsilon)$ (here $\mu$ is the Lebesgue measure) one has
$\int_E |u_{n_k}|^q \, d \mu < \varepsilon$ for all $k \in \mathbb{N}$. Hence by applying the fact that $\nabla_x f$ satisfies the growth condition of order $(q/p, p)$ one obtains that there exist $C_R > 0$ and an a.e. nonnegative function $\omega_R \in L^p(0, T)$ such that for any measurable set $E \subset (0, T)$ with $\mu(E) < \delta(\varepsilon)$ one has $$
\int_E |\nabla_x f(x_{n_k}(t), u_{n_k}(t), t)|^p d \mu(t) \le \int_E \big( C_R |u_{n_k}|^{q/p} + \omega_R \big)^p d \mu
\le 2^p \Big( C_R^p \varepsilon + \int_E \omega_R^p d \mu \Big). $$ Taking into account the absolute continuity of the Lebesgue integral and the fact that $\omega_R \in L^p(0, T)$, and decreasing $\delta(\varepsilon) > 0$, if necessary, one can suppose that $\int_E \omega_R^p d \mu < \varepsilon$. Therefore, choosing a sufficiently small $\varepsilon > 0$ one can make the integral
$\int_E |\nabla_x f(x_{n_k}(t), u_{n_k}(t), t)|^p d \mu(t)$ arbitrarily small for all $k \in \mathbb{N}$ and measurable sets $E \subset (0, T)$ with $\mu(E) < \delta(\varepsilon)$. Consequently, by the ``if'' part of Vitali's theorem on convergence in $L^p$ spaces the sequence $\{ \nabla_x f(x_{n_k}(\cdot), u_{n_k}(\cdot), \cdot) \}$ converges to $\nabla_x f(x(\cdot), u(\cdot), \cdot)$ in $L^{d \times d}_p(0, T)$, which contradicts \eqref{NonConvergenceInLp}. \end{proof}
\end{document} | arXiv |
GLE73 Event (October 28, 2021) in Solar Cosmic Rays
Yu. V. Balabin, B. B. Gvozdevsky, A. V. Germanenko, E. A. Maurchev & E. A. Michalko

Bulletin of the Russian Academy of Sciences: Physics, volume 86, pages 1542–1548 (2022)
Results are presented from analyzing the GLE73 event in terms of solar cosmic rays. The GLE73 event raised the count rate by 2–6% at polar stations of the World Neutron Monitor Network. A direct solution to the inverse problem is found, and the energy spectra of solar cosmic rays at the boundary of the magnetosphere are obtained, along with the pitch angle distribution of the flux.
Ground level enhancement (GLE) events are caused by eruptions on the Sun that are accompanied by solar flares or coronal mass ejections. Such processes often generate solar energetic particles (SEPs) (mainly protons) with energies of up to hundreds of MeV in the Sun and emit them into interplanetary space. GLE events are extreme cases of SEPs where the proton energy can be more than 430 MeV (a rigidity of 1 GV) [1].
The first GLE event of the new 25th solar activity cycle occurred on October 28, 2021, and was detected both on spacecraft and by ground-based stations of the World Neutron Monitor (NM) Network. These were mainly NMs that had an atmospheric cutoff rigidity of 1 GV or a geomagnetic cutoff rigidity close to it.
It should be noted that the new GLE event continued the series of low-amplitude events that began as early as the middle of the 24th cycle. However, a fair number of stations recorded it with amplitudes above 1% (the mean fluctuation threshold of a standard NM), which allows us to analyze the event without simplifications. This requires data from a minimum number of stations with amplitudes above 2% [2]. This number is not strictly fixed and depends on the amplitude of the increase at the NM. Based on the experience of analyzing many GLE events, the minimum number of NMs for a correct interpretation is around 20.
EVENT OF OCTOBER 28, 2021
The event lasted ~3 h and had a maximum amplitude of 6%. The highest amplitude was detected at the Calgary and Fort Smith stations in North America. The neutron monitors in Apatity and Barentsburg (Svalbard) recorded amplitudes of 2–4%. The event (codenamed GLE73) originated from the beta–gamma active region AR 2887 with coordinates S28W02, with a flare of class X1.0, an X-ray emission maximum at 15:35 UT, and a type II/IV radio burst. The start of the event was first detected at the South Pole station at 15:55 UT. The GLE73 event increased the cosmic ray flux by 2–6% at polar NMs. Low-latitude stations did not detect this increase, but several mid-latitude stations whose asymptotic receiving cones were near the axis of anisotropy did. The first of these was the Calgary NM, located at a height of 1200 m. The data indicate the solar cosmic ray (SCR) spectrum was soft. The interplanetary and geomagnetic situation during the day of the GLE event was quiet, the Kp index was 1, the Dst index was near 0, and the solar wind speed and density were moderate. This means the configuration of the interplanetary magnetic field (IMF) generally corresponded to a typical Parker spiral.
The technique we created and use to determine the parameters of the primary proton flux at the magnetosphere's boundary requires that we calculate the asymptotic cones (ACs) of reception for NMs with high accuracy, using the model of the magnetosphere that most accurately describes the state of the Earth's magnetosphere. We used Tsyganenko's T-01 model [3], which works well in analyzing other GLE events. ACs were calculated for all stations in the 1 to 20 GV range of rigidity. In our technique, there is no averaging or calculation of the effective penumbra rigidity, since the ACs of all stations are calculated in the abovementioned 1–20 GV range of rigidity with a constant step of 0.001 GV, and forbidden rigidities (values for which particles cannot penetrate from interplanetary space to a given NM station) are marked in a special array and excluded from the calculations when determining the response of that NM. This eliminates the error introduced by calculating the effective penumbra rigidity, because this quantity depends on the type of the spectrum, which is not known before solving the inverse problem. Our map of asymptotic cones of reception for a series of stations is presented in Fig. 1. The station names start from the AC edge corresponding to 20 GV. The position of the axis of anisotropy is marked by plus signs; the lines of equal pitch angles are shown by black dots, and the corresponding values of the pitch angle are indicated. The ACs were calculated for 17:00 UT using the T-01 magnetosphere model.
Map of asymptotic receiving cones for some high-latitude stations at 17 UT: Thule (THUL), Inuvik (INUK), Fort Smith (FSMT), Calgary (CALG), Peawanuck (PWNK), Jang Bogo (JGBO), Nain (NAIN), SANAE (SNAE), South Pole (SOPO), Dome C (DOMC), Mawson (MWSN), Kerguelen (KERG), Apatity (APTY), Barentsburg (BRBG), Norilsk (NRLK), and Tixie Bay (TXBY). The plus signs denote the calculated axis of SCR anisotropy; the asterisks, the points of intersection of the celestial sphere and the IMF axis. Lines of equal pitch angles are shown by black dots; the numbers near them show the pitch angle.
The growth profiles of the Calgary and Fort Smith NM stations during the GLE73 event (Fig. 2) are typical of a GLE: a rather sharp front and a slow decline, which is observed when the receiving cones of the stations are near the axis of anisotropy and receive an anisotropic flux that propagates along the axis of anisotropy with weak scattering and reaches the Earth first. A smooth growth shows that the station received a scattered particle flux whose density grew gradually and with a delay.
Profiles of count growth at NM stations: (a) Apatity (APTY) and Barentsburg (BRBG); (b) Tixie Bay (TXBY) and Yakutsk (YKTK); (c) Calgary (CALG) and South Pole (SOPO); and (d) Fort Smith (FSMT) and Dome C (DOMC). Profiles of the Calgary, South Pole, and Dome C high-mountain stations are normalized to a barometric level of 1000 mb. Five-minute data are used.
Some NMs are located in mountains. Due to the barometric effect, the amplitude of growth at these stations is considerably higher than when an NM is at sea level at the same geographic point. This is because the effective path lengths of galactic and solar cosmic rays in the atmosphere differ (~140 and ~100 g/cm2, respectively), and SCRs are absorbed more strongly by the atmosphere [4]. The magnitude of growth at different heights must be adjusted to a common barometric level. Since most NMs are located near sea level, it is convenient to take 1000 mb as the base value. The adjustment to a common barometric level is done using two lengths of attenuation [5]. It is after barometric correction that the South Pole and Dome C NMs ceased to be among the stations with the highest amplitude of growth, and Fort Smith and Peawanuck became the stations with the highest amplitude in the GLE73 event. These NMs can be used to solve the inverse problem only after performing all the described procedures.
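As an illustration, the reduction to a common barometric level can be sketched in a few lines of code (a minimal sketch assuming the standard exponential absorption model; the attenuation lengths are the approximate values quoted above, and the station depth is illustrative):

import math

LAMBDA_GCR = 140.0  # attenuation length of galactic cosmic rays, g/cm^2
LAMBDA_SCR = 100.0  # attenuation length of solar cosmic rays, g/cm^2

def amplitude_at_1000mb(amplitude_pct, depth, ref_depth=1019.7):
    # The GCR background and the SCR excess are both attenuated
    # exponentially, so the relative increase N_SCR/N_GCR scales as
    # exp(-x * (1/LAMBDA_SCR - 1/LAMBDA_GCR)) with atmospheric depth x;
    # 1000 mb corresponds to ~1019.7 g/cm^2 of overburden.
    k = 1.0 / LAMBDA_SCR - 1.0 / LAMBDA_GCR
    return amplitude_pct * math.exp(-(ref_depth - depth) * k)

# A high-mountain station at ~650 g/cm^2 recording a 6% increase
# corresponds to only about a 2% increase at the 1000 mb level.
print(round(amplitude_at_1000mb(6.0, 650.0), 1))  # 2.1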
Even though mid- and low-latitude NMs showed no increase in the SCR flux during the GLE73 event, some NMs with a zero increase must be in the list of the relevant stations. Such stations with a zero increase mark the upper limit of the energy spectrum of SCRs. Mid-latitude stations have an extended AC that covers more than 180° of longitude near the equator. Particles in the SCR flux above the cutoff threshold of such stations would raise the count rate at them. We used Russian and European mid-latitude NMs with cutoff rigidities of 3–5 GV, e.g., Novosibirsk, Moscow, and Jungfraujoch. However, an excess of such stations is undesirable because the mean growth amplitude (averaged over the number of NMs used) is reduced. The residual is reduced as well, and the minimum search becomes difficult. Hermanus, Potchefstroom, and other low-latitude stations that we knew could not exhibit an increase were therefore not used in solving the inverse problem. A total of 27 NM stations were selected for analysis, among which there were around 20 polar and near-polar stations.
The convergence of the solution depends on the accuracy of the calculated ACs, so we need to check that the specified date and time, the other parameters, and the calculated ACs correspond to the real position of the Earth in space. The hour of 17 UT corresponds to a turn of the prime meridian of Greenwich by 75° in the GSE coordinate system from the direction to the Sun. Apatity is located at ~35° E. The drift of protons with rigidities of 10–20 GV in the Earth's magnetosphere is 40°–60°, depending on the state of the magnetosphere. As a result, the AC of Apatity in the range of 10–20 GV must be located in the 150°–170° range of longitudes. The interplanetary magnetic field under quiet conditions extends from the Sun to the Earth along a Parker spiral. At the Earth's orbit, the angle between the tangent to this spiral and the direction to the Sun is 30°–60° to the west, depending on the speed of the solar wind. It is also known that the ACs of Kerguelen and Apatity always intersect when the magnetosphere is not too disturbed. All of the above corresponds to the map in Fig. 1.
SOLVING THE INVERSE PROBLEM
Parameters of SCRs arriving at the magnetosphere's boundary from interplanetary space are determined by solving the inverse problem with data from the ground-based NM network. In other words, the characteristics of the SCR flux (the energy spectrum, pitch angle distribution, and direction of arrival) are chosen so that the increases (responses) calculated from these characteristics for the world NM network have minimal discrepancies with those actually detected. The general way of solving the inverse problem was proposed for the first time in [6].
A key aspect of any way of solving the inverse problem is that the assumed forms of the functional dependences relating the characteristics implicitly determine both the accuracy of the solution and the possible form of the result. For example, specifying the power form of the spectrum restricts possible solutions to only power dependences. When the real SCR spectrum has another form (e.g., exponential), any solution will have a large error that cannot be eliminated by optimizing the algorithm or choosing other parameters.
We must therefore specify the form of the dependences in the most general terms. This approach is used in our technique. SCR rigidity spectrum I(R) is specified in the form
$$I(R) = J_{0}R^{-\gamma - \Delta\gamma (R - 1)},$$
where J0 is the SCR flux at R = 1, γ is the spectrum exponent, and Δγ is the spectrum correction.
This way of specifying the spectrum was proposed in [7] and has actively been used in our other works (e.g., [8]). It allows us to obtain different forms of the spectrum. The power spectrum is obtained when Δγ = 0. It has been determined empirically that for any reasonable value of R0, there exists a pair of values (γ, Δγ) such that functional dependence (1) on the finite rigidity range 1–20 GV coincides with an exponential spectrum having characteristic rigidity R0. Other values of the pair (γ, Δγ) specify a spectrum intermediate between the power and exponential forms.
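In code, spectrum (1) is a one-liner (a sketch with illustrative parameter values; rigidity is in GV):

import numpy as np

def rigidity_spectrum(R, J0, gamma, dgamma):
    # I(R) = J0 * R**(-gamma - dgamma*(R - 1)); dgamma = 0 gives a pure
    # power law, while dgamma > 0 steepens the spectrum with rigidity,
    # mimicking an exponential form over the finite 1-20 GV range.
    return J0 * R ** (-(gamma + dgamma * (R - 1.0)))

R = np.arange(1.0, 20.0, 0.001)  # the same 0.001 GV step used for the ACs
power_law = rigidity_spectrum(R, J0=1.0, gamma=5.5, dgamma=0.0)
mixed_form = rigidity_spectrum(R, J0=1.0, gamma=4.0, dgamma=0.3)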
The pitch angle distribution of SCRs can have different forms [9]. The simplest form of the pitch angle distribution (PAD) is Gaussian; it was used in [6], when the computational capacities were modest. However, it does not always correspond to conditions of SCR propagation and scattering in interplanetary space. The PADs observed in GLE events have shown a linear dependence [10], a dip at angles of ~90°, and an additional flux from the reverse (antisolar) direction. The last is possible when magnetic loop structures extend from the Sun into interplanetary space, as was shown in [11, 12]. As when specifying the spectrum, it is important to create the most general form, one that can reproduce different functional dependences (e.g., Gaussian, linear, or bidirectional) at different values of the parameters. The form we chose to specify the pitch angle distribution was
$$F(\theta) = \exp\left(\frac{-\theta^{2}}{c}\right)\cdot\left[1 - a\exp\left(\frac{-\left(\theta - \frac{\pi}{2}\right)^{2}}{b}\right)\right],$$
where θ is the pitch angle, c is the parameter determining the PAD width, and the factor in square brackets forms the PAD singularity at angles close to 90°. Analysis of expression (2) shows that the possibilities of this form are much broader than the mere creation of a singularity at angles of ~90°. At the same time, the variety of PAD forms is obtained using only three parameters. Combinations of the parameters (c, a, b) can yield a PAD with a linear form. At a = 0, expression (2) takes the form of a simple Gaussian. Values 1 > a > 0 and b ≪ c result in a dip at pitch angles near 90°. At a < 0, a hump appears in the PAD near 90°. At b ≈ c, both a displacement of the PAD maximum from 0° to any angle up to 90° (if a < 0) and a linear PAD (if a > 0) are possible. Expression (2) was used in seeking the solution to the inverse problem for a series of GLE events in [8, 11, 12].
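The same form (2) can be written compactly as follows (a sketch; the comments summarize the parameter regimes just described):

import numpy as np

def pitch_angle_distribution(theta, c, a, b):
    # F(theta) = exp(-theta**2/c) * [1 - a*exp(-(theta - pi/2)**2/b)].
    # a = 0: simple Gaussian; 0 < a < 1 with b << c: dip near 90 degrees;
    # a < 0: hump near 90 degrees; b ~ c: near-linear PAD (a > 0) or a
    # maximum displaced away from 0 degrees (a < 0). Angles in radians.
    return np.exp(-theta ** 2 / c) * (1.0 - a * np.exp(-(theta - np.pi / 2) ** 2 / b))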
Finally, the expression for calculating the response of the L-th NM has the form
$$\Delta N_{L} = \sum_{R = 1}^{20} I(R)F(\theta_{L}(R))S(R)A_{L}(R)\,dR,$$
where ΔNL is the increase at the Lth NM station, S(R) is the tabulated specific collection function, and AL(R) is an array containing the list of admissible and forbidden rigidities for the Lth NM that forms when calculating the AC. Summation is done with the same rigidity step dR = 0.001 GV as in calculating the AC. The left-hand side of the expression is a function of seven parameters (γ, Δγ, c, a, b, Ω, and Φ). Angles Ω and Φ determine the position of the axis of anisotropy. They are implicitly contained in (2) because the pitch angle is determined relative to a certain direction specified by the angles Ω and Φ in the spherical coordinate system. The total residual over the NM network is expressed as
$$G(\gamma, \Delta\gamma, c, a, b, \Omega, \Phi) = \sum_{L} \left(\Delta N_{L}(\gamma, \Delta\gamma, c, a, b, \Omega, \Phi) - \delta N_{L}\right)^{2},$$
where δNL is the increase actually measured at the Lth NM station. G expresses the sum of squared differences between the calculated NM response to the SCR flux specified by the parameters (γ, Δγ, c, a, b, Ω, Φ) and the real increase at the NM. The minimum of function G corresponds to the solution to the inverse problem.
Searching for the minimum of a multiparametric function is a complicated matter, but present-day computational capacities and new numerical means allow us to obtain a fairly stable solution to expression (4). Note that proceeding from the above preliminary analysis of the growth profiles, initial values of the parameters listed in the function G can be found not far from the minimum point. For example, the axis of anisotropy lies between the ACs of Inuvik and Peawanuck near Fort Smith and Calgary, which exhibited the maximum increase and a sharp leading front. This allows us to remain in the region of a stable solution, accelerates the search for the minimum of expression (4), and makes it easier to find.
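Schematically, the search for the minimum of (4) can be organized as below (a simplified sketch building on the two functions above; the yield function here is a placeholder, and the direction angles Ω and Φ are assumed to be folded into the precomputed pitch angles θL(R), whereas the full search varies them as well):

import numpy as np
from scipy.optimize import minimize

dR = 0.001
R = np.arange(1.0, 20.0, dR)  # rigidity grid, GV
S = 1.0 / R                   # placeholder for the tabulated yield function S(R)

def nm_response(theta_L, allowed, params):
    # Expression (3) for one station; theta_L(R) and the 0/1 array of
    # admissible rigidities A_L(R) come from the asymptotic-cone calculation.
    gamma, dgamma, c, a, b = params
    I = rigidity_spectrum(R, 1.0, gamma, dgamma)
    F = pitch_angle_distribution(theta_L, c, a, b)
    return np.sum(I * F * S * allowed) * dR

def residual(params, stations, observed):
    # Expression (4): sum of squared differences over the NM network.
    modelled = np.array([nm_response(th, al, params) for th, al in stations])
    return np.sum((modelled - observed) ** 2)

# 'stations' would hold (theta_L(R), A_L(R)) pairs per NM; a good starting
# point places the anisotropy axis near the Fort Smith and Calgary cones:
# result = minimize(residual, x0, args=(stations, observed), method="Nelder-Mead")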
SCR SPECTRA
The inverse problem is usually solved using 5-min NM data. The result is a sequence of spectra and other parameters of the SCR flux during the main GLE period. This sequence is determined with the same 5‑min step and describes the dynamics of SCR spectra over the GLE period. Even though the start of the increase was first recorded at 15:55 UT, the inverse problem can be solved only after 16:25 UT, when a sufficient number of NMs had recorded an amplitude of at least 2%. After 18:00 UT, the number of stations showing an amplitude high enough for solving the problem again became less than was needed.
Figure 3 presents the SCR spectra recalculated to the energy scale at certain points in time. Table 1 gives the numerical values of the parameters. The flux is determined in the units shown in the plot of the spectra; the second column presents characteristic energy E0 (GeV) or spectral exponent γ (values of E0 are in the column on the left; values of γ, on the right), and Ω and Φ are presented in degrees. Figure 4 shows the pitch angle distributions of SCRs.
Energy spectra of SCRs for typical times in the (a) semilogarithmic and (b) double logarithmic scales. The lines in the left plot show the exponential dependence. The lines to the right show the power dependence.
Table 1. Parameters of the SCR flux at different moments in time
(a) Pitch angle distributions of the SCR flux at 16:25, 16:30, and 16:40 UT in Cartesian coordinates and (b) the same PAD in polar coordinates. (c) Pitch angle distributions of SCRs at 16:40, 16:55, 17:05, and 17:45 UT in Cartesian coordinates and (d) the same PAD in polar coordinates. The PAD at 16:40 is shown for a comparison of scales.
The spectrum at the beginning of the event (16:25 and 16:30 UT) had an exponential form. It then started to transition to the power form. This behavior corresponds to most GLE events processed with our technique [11]. The spectrum was still exponential at 16:40 UT, but considerably softer (characteristic energy E0 is lower). By the time of the maximum at 17:00 UT, the spectrum had become a power spectrum. Judging from the spectrum, the flux was halved at 16:30 UT. However, the PAD shows this happened only at small pitch angles (<40°). The reduction was negligible at large pitch angles. There is a simple explanation for this. When there is a brief flare on the Sun, particles moving with small pitch angles reach the Earth rapidly in a bunch and travel on, as is indicated by the drop in the flux density at small pitch angles. Particles scattered to large pitch angles drift more slowly along IMF lines and spread along them. The PAD also did not change shape appreciably after the first bunch passed. There is only a proportional increase in the flux at all angles with simultaneous softening of the spectrum.
The asterisks in Fig. 1 mark the intersection of the line along which the IMF vector lies and the celestial sphere (referred to as the IMF axis). For SCR protons propagating in the IMF, the direction of the magnetic field vector (from or to the Sun) is not important. What is important is the field line. We can see the IMF axis is quite far from the calculated axis of anisotropy. This is because the IMF was weak (only ~3 nT) during the second half of the day on October 28, as the ACE data show. Component Bx oscillated strongly from −2 to 2 nT with a change in sign. The value of Bz was positive most of the time, while By had consistently negative values. The longitude of the IMF axis shifted to −60° at positive Bx values, and to 60° at negative values. It is this position of the axis that is reflected on the map for 17 UT. The SCR flux is weakly sensitive to such rapid oscillations of the direction of the IMF vector. Calculations show the Larmor radius of a proton with a rigidity of 1 GV in a magnetic field of 3 nT is more than 1 million kilometers. Such a small-scale IMF structure therefore has no effect on the motion of SCRs that have rigidities of one to several gigavolts.
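The quoted gyroradius is easy to check: for a proton of rigidity R in a field B, the Larmor radius is r = R/(cB) when R is expressed in volts (a one-line verification):

R = 1e9          # rigidity, V (1 GV)
c = 2.998e8      # speed of light, m/s
B = 3e-9         # IMF strength, T

r = R / (c * B)  # rigidity R = p*c/q implies r = p/(q*B) = R/(c*B)
print(f"{r:.2e} m")  # ~1.11e+09 m, i.e. just over a million kilometres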
GLE73 was generally an ordinary event. It was distinguished by neither its parameters nor the shapes of its profile. Other GLEs had low amplitudes throughout the 24th cycle, and GLE73 continued this series in the new 25th cycle.
The form of the PAD (at pitch angles <90°) was conspicuously linear for much of the event. A similar one was observed in the GLE70 event on December 13, 2006 [8] and in GLE71 (May 17, 2012), but it was very rare in events of 2000–2005 and earlier [11]. This could have been due to conditions at the source in the Sun at the instant of the flare, the weak interplanetary magnetic field, and its calmness. This feature of a PAD could be the subject of a separate study of the state of the IMF according to SCRs. Recall that the linear form of the PAD was not specified in solving the inverse problem. It appeared due to a special combination of the parameters (c, a, b) determining the general form of a PAD.
Results were presented from analyzing the first event of the 25th cycle in terms of solar cosmic rays (GLE73 of October 28, 2021). The energy spectra and other parameters of the solar cosmic ray flux beyond the Earth's magnetosphere were obtained by solving the inverse problem through much of the event. GLE73 was an ordinary event. In the initial phase, the SCR spectrum had an exponential form. It then smoothly transitioned into a power spectrum. Characteristic energy E0 ≈ 0.6 GeV, and spectrum exponent γ ≈ 5.5. These are the most typical values of GLEs [11].
REFERENCES

1. Miroshnichenko, L.I., J. Space Weather Space Clim., 2018, vol. 8, A52.

2. Miroshnichenko, L.I., Klein, K.-L., Trottet, G., et al., J. Geophys. Res., 2005, vol. 110, A09S08.

3. Tsyganenko, N.A., J. Geophys. Res., 2002, vol. 107, p. 1176.

4. Dorman, L.I., Eksperimental'nye i teoreticheskie osnovy astrofiziki kosmicheskikh luchei (Experimental and Theoretical Foundations of Cosmic Ray Astrophysics), Moscow: Nauka, 1975.

5. Kaminer, N.S., Geomagn. Aeron., 1967, vol. 7, no. 5, p. 806.

6. Shea, M.A. and Smart, D.F., Space Sci. Rev., 1982, vol. 32, p. 251.

7. Cramp, J.L., Duldig, M.L., Flückiger, E.O., et al., J. Geophys. Res., 1997, vol. 102, no. A11, p. 24237.

8. Vashenyuk, E.V., Balabin, Yu.V., Gvozdevskii, B.B., and Shchur, L.I., Geomagn. Aeron., 2008, vol. 48, no. 2, p. 149.

9. Bieber, J.W., Evenson, P.A., and Pomerantz, M.A., J. Geophys. Res., 1986, vol. 91, no. A8, p. 8713.

10. Bieber, J.W., Evenson, P.A., Pomerantz, M.A., et al., Astrophys. J., 1994, vol. 420, p. 294.

11. Vashenyuk, E.V., Balabin, Yu.V., and Gvozdevsky, B.B., Astrophys. Space Sci. Trans., 2011, vol. 7, p. 459.

12. Balabin, Yu.V., Vashenyuk, E.V., Mingalev, O.V., et al., Astron. Rep., 2005, vol. 82, no. 10, p. 837.
This work was supported by the Russian Science Foundation, project no. 18-77-10018.
Polar Geophysical Institute, 184209, Apatity, Russia
Yu. V. Balabin, B. B. Gvozdevsky, A. V. Germanenko, E. A. Maurchev & E. A. Michalko
Correspondence to Yu. V. Balabin.
Translated by A. Nikol'skii
Balabin, Y.V., Gvozdevsky, B.B., Germanenko, A.V. et al. GLE73 Event (October 28, 2021) in Solar Cosmic Rays. Bull. Russ. Acad. Sci. Phys. 86, 1542–1548 (2022). https://doi.org/10.3103/S1062873822120048
Received: 29 July 2022
Issue Date: December 2022 | CommonCrawl |
5.6: Negative Exponents
[ "article:topic", "license:ccbyncsa", "showtoc:no" ]
Book: Beginning Algebra (Redden)
5: Polynomials and Their Operations
Negative Exponents
Simplify expressions with negative integer exponents.
Work with scientific notation.
In this section, we define what it means to have negative integer exponents. We begin with the following equivalent fractions:
\(\frac{1}{8}=\frac{4}{32}\)
Notice that \(4, 8\), and \(32\) are all powers of \(2\). Hence we can write \(4=2^{2}\), \(8=2^{3}\), and \(32=2^{5}\).
\(\frac{1}{2^{3}}=\frac{1}{8}=\frac{4}{32}=\frac{2^{2}}{2^{5}}\)
If the exponent of the term in the denominator is larger than the exponent of the term in the numerator, then the application of the quotient rule for exponents results in a negative exponent. In this case, we have the following:
\(\color{Cerulean}{\frac{1}{2^{3}}}\color{black}{=\frac{1}{8}=\frac{4}{32}=\frac{2^{2}}{2^{5}}=2^{2-5}=}\color{Cerulean}{2^{-3}}\)
We conclude that \(2^{-3}=\frac{1}{2^{3}}\). This is true in general and leads to the definition of negative exponents. Given any integer \(n\) and \(x≠0\), then
\[x^{-n}=\frac{1}{x^{n}}\]
Here \(x≠0\) because \(\frac{1}{0}\) is undefined. For clarity, in this section, assume all variables are nonzero.
Simplifying expressions with negative exponents requires that we rewrite the expression with positive exponents.
Example \(\PageIndex{1}\)
Simplify:
\(10^{-2}\).
\(\begin{aligned} 10^{-2}&=\frac{1}{10^{2}} \\ &=\frac{1}{100} \end{aligned}\)
\(\frac{1}{100}\)
Example \(\PageIndex{2}\)

Simplify:

\((-3)^{-1}\).
\(\begin{aligned} (-3)^{-1}&=\frac{1}{(-3)^{1}} \\ &=-\frac{1}{3} \end{aligned}\)
\(-\frac{1}{3}\)
Example \(\PageIndex{3}\)

Simplify:

\(\frac{1}{y^{-3}}\).
\(\begin{aligned} \frac{1}{y^{-3}} &=\frac{1}{\frac{1}{y^{3}}} \\ &=1\cdot \frac{y^{3}}{1} \\ &=y^{3} \end{aligned}\)
\(y^{3}\)
At this point we highlight two very important examples,
Figure 5.6.1
If the grouped quantity is raised to a negative exponent, then apply the definition and write the entire grouped quantity in the denominator. If there is no grouping, then apply the definition only to the base preceding the exponent.
Example \(\PageIndex{4}\)

Simplify:

\((2ab)^{-3}\).
First, apply the definition of −3 as an exponent and then apply the power of a product rule.
\(\begin{aligned} (2ab)^{-3} &=\frac{1}{(2ab)^{3}} \qquad\color{Cerulean}{Apply\:the\:negative\:exponent.} \\ &=\frac{1}{2^{3}a^{3}b^{3}} \qquad\color{Cerulean}{Apply\:the\:power\:rule\:for\:a\:product.} \\ &=\frac{1}{8a^{3}b^{3}} \end{aligned}\)
\(\frac{1}{8a^{3}b^{3}}\)
Example \(\PageIndex{5}\)

Simplify:

\((-3xy^{3})^{-2}\).
\(\begin{aligned} (-3xy^{3})^{-2}&=\frac{1}{(-3xy^{3})^{2}} \\&=\frac{1}{(-3)^{2}x^{2}(y^{3})^{2}} \\ &=\frac{1}{9x^{2}y^{6}} \end{aligned}\)
\(\frac{1}{9x^{2}y^{6}}\)
Example \(\PageIndex{6}\)

Simplify:

\(\frac{x^{-3}}{y^{-4}}\).
\(\frac{x^{-3}}{y^{-4}}=\frac{\frac{1}{x^{3}}}{\frac{1}{y^{4}}}=\frac{1}{x^{3}}\cdot\frac{y^{4}}{1}=\frac{y^{4}}{x^{3}}\)
\(\frac{y^{4}}{x^{3}}\)
The previous example suggests a property of quotients with negative exponents. If given any integers \(m\) and \(n\), where \(x≠0\) and \(y≠0\), then
\[\frac{x^{-n}}{y^{-m}}=\frac{y^{m}}{x^{n}}\]
In other words, negative exponents in the numerator can be written as positive exponents in the denominator, and negative exponents in the denominator can be written as positive exponents in the numerator.
Example \(\PageIndex{7}\)

Simplify:

\(\frac{-2x^{-5}y^{3}}{z^{-2}}\).
Take care with the coefficient \(−2\); recognize that this is the base and that the exponent is actually \(+1:\: −2=(−2)^{1}\). Hence the rules of negative exponents do not apply to this coefficient; leave it in the numerator.
\(\begin{aligned} \frac{-2x^{-5}y^{3}}{z^{-2}}&=\frac{-2\color{Cerulean}{x^{-5}}\color{black}{y^{3}}}{\color{OliveGreen}{z^{-2}}} \\ &=\frac{-2y^{3}\color{OliveGreen}{z^{2}}}{\color{Cerulean}{x^{5}}} \end{aligned}\)
\(\frac{-2y^{3}z^{2}}{x^{5}}\)
Example \(\PageIndex{8}\)

Simplify:

\(\frac{(-3x^{-4})^{-3}}{y^{-2}}\).
\(\begin{aligned} \frac{(-3x^{-4})^{-3}}{y^{-2}}&=\frac{(-3)^{-3}(x^{-4})^{-3}}{y^{-2}} &\color{Cerulean}{Apply\:the\:product\:to\:a\:power\:rule.} \\ &=\frac{(-3)^{-3}x^{12}}{y^{-2}} &\color{Cerulean}{Power\:rule} \\ &=\frac{x^{12}y^{2}}{(-3)^{3}} &\color{Cerulean}{Negative\:exponents} \\ &=\frac{x^{12}y^{2}}{-27} \\ &=-\frac{x^{12}y^{2}}{27} \end{aligned}\)
\(-\frac{x^{12}y^{2}}{27}\)
Example \(\PageIndex{9}\)

Simplify:

\(\frac{(3x^{2})^{-4}}{(-2y^{-1}z^{3})^{-2}}\).
\(\begin{aligned} \frac{(3x^{2})^{-4}}{(-2y^{-1}z^{3})^{-2}}&=\frac{3^{-4}(x^{2})^{-4}}{(-2)^{-2}(y^{-1})^{-2}(z^{3})^{-2}} &\color{Cerulean}{Product\:to\:a\:power\:rule} \\ &=\frac{3^{-4}x^{-8}}{(-2)^{-2}y^{2}z^{-6}} &\color{Cerulean}{Power\:rule} \\ &=\frac{(-2)^{2}z^{6}}{3^{4}x^{8}y^{2}} &\color{Cerulean}{Negative\:exponents} \\&=\frac{4z^{6}}{81x^{8}y^{2}} \end{aligned}\)
\(\frac{4z^{6}}{81x^{8}y^{2}}\)
Example \(\PageIndex{10}\)
\(\frac{(5x^{2}y)^{3}}{x^{-5}y^{-3}}\).
First, apply the power of a product rule and then the quotient rule.
\(\frac{(5x^{2}y)^{3}}{x^{-5}y^{-3}} = \frac{5^{3}x^{6}y^{3}}{x^{-5}y^{-3}}=5^{3}x^{6-(-5)}y^{3-(-3)}=5^{3}x^{6+5}y^{3+3}=125x^{11}y^{6}\)
\(125x^{11}y^{6}\)
To summarize, we have the following rules for negative integer exponents with nonzero bases:
Negative exponents:
\(x^{-n}=\frac{1}{x^{n}}\)
Quotients with negative exponents:
\(\frac{x^{-n}}{y^{-m}}=\frac{y^{m}}{x^{n}}\)
Table 5.6.1
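These rules can also be checked with exact arithmetic; for example, in Python (an illustrative aside using the fractions module):

from fractions import Fraction

print(Fraction(2) ** -3)     # 1/8, since 2^(-3) = 1/2^3
print(Fraction(-3) ** -1)    # -1/3
print(Fraction(1, 2) ** -3)  # 8, since (1/2)^(-3) = 2^3
print(Fraction(3) ** -2 / Fraction(5) ** -4)  # 625/9 = 5^4/3^2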
Exercise \(\PageIndex{1}\)

Simplify:

\(\frac{(-5xy^{-3})^{-2}}{5x^{4}y^{-4}}\).
\(\frac{y^{10}}{125x^{6}}\)
Real numbers expressed in scientific notation have the form
\(a\times 10^{n}\)
where \(n\) is an integer and \(1≤a<10\). This form is particularly useful when the numbers are very large or very small. For example,
\(\begin{array}{cc}{9,460,000,000,000,000m=9.46\times 10^{15}m}&{\color{Cerulean}{One\:light\:year}}\\{0.000000000025m=2.5\times 10^{-11}m}&{\color{Cerulean}{Radius\:of\:a\:hydrogen\:atom}} \end{array}\)
It is cumbersome to write all the zeros in both of these cases. Scientific notation is an alternative, compact representation of these numbers. The factor \(10^{n}\) indicates the power of \(10\) by which to multiply the coefficient to convert back to decimal form. For \(9.46\times 10^{15}\), this is equivalent to moving the decimal in the coefficient fifteen places to the right. A negative exponent indicates that the number is very small: for \(2.5\times 10^{-11}\), it is equivalent to moving the decimal in the coefficient eleven places to the left.
Converting a decimal number to scientific notation involves moving the decimal as well. Consider all of the equivalent forms of \(0.00563\) with factors of \(10\) that follow:
\(\begin{aligned} 0.00563&=0.0563\times 10^{-1} \\ &=0.563\times 10^{-2} \\&\color{Cerulean}{=5.63\times 10^{-3}} \\&=56.3\times 10^{-4} \\&=563\times 10^{-5} \end{aligned}\)
While all of these are equal, \(5.63×10^{−3}\) is the only form considered to be expressed in scientific notation. This is because the coefficient \(5.63\) is between \(1\) and \(10\) as required by the definition. Notice that we can convert \(5.63×10^{−3}\) back to decimal form, as a check, by moving the decimal to the left three places.
Example \(\PageIndex{11}\)

Write \(1,075,000,000,000\) using scientific notation.
Here we count twelve decimal places to the left of the decimal point to obtain the number \(1.075\).
\(1,075,000,000,000=1.075\times 10^{12}\)
\(1.075\times 10^{12}\)
Example \(\PageIndex{12}\)

Write \(0.000003045\) using scientific notation.
Here we count six decimal places to the right to obtain \(3.045\).
\(0.000003045=3.045\times 10^{-6}\)
\(3.045\times 10^{-6}\)
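These conversions can be mirrored in code; Python's "e" format uses exactly this coefficient–exponent form (a quick illustration):

print(f"{1075000000000:.3e}")       # 1.075e+12
print(f"{0.000003045:.3e}")         # 3.045e-06
print(f"{float('3.045e-06'):.9f}")  # 0.000003045, converting back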
Often we will need to perform operations when using numbers in scientific notation. All the rules of exponents developed so far also apply to numbers in scientific notation.
Example \(\PageIndex{13}\)

Multiply:
\((4.36×10^{−5})(5.3×10^{12})\).
Use the fact that multiplication is commutative and apply the product rule for exponents.
\(\begin{aligned} (4.36×10^{−5})(5.3×10^{12})&=(4.36\cdot 5.30)\times (10^{-5}\cdot 10^{12}) \\&=\color{Cerulean}{23.108}\color{black}{\times 10^{-5+12}} \\&=\color{Cerulean}{2.3108\times 10^{1}}\color{black}{\times 10^{7}} \\&=2.3108\times 10^{1+7} \\ &=2.3108\times 10^{8} \end{aligned}\)
\(2.3108\times 10^{8}\)
Example \(\PageIndex{14}\)

Divide:
\((3.24\times 10^{8})\div (9.0\times 10^{-3})\).
\(\begin{aligned} \frac{(3.24\times 10^{8})}{(9.0\times 10^{-3})}&= \left( \frac{3.24}{9.0} \right) \times \left( \frac{10^{8}}{10^{-3}} \right) \\ &=0.36\times 10^{8-(-3)} \\&=\color{Cerulean}{0.36}\color{black}{\times 10^{8+3}} \\&=\color{Cerulean}{3.6\times 10^{-1}}\color{black}{\times 10^{11}} \\&=3.6\times 10^{-1+11} \\ &=3.6\times 10^{10} \end{aligned}\)
\(3.6\times 10^{10}\)
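Both computations follow the product and quotient rules for the powers of \(10\); a quick numerical check:

print(f"{4.36e-5 * 5.3e12:.4e}")  # 2.3108e+08
print(f"{3.24e8 / 9.0e-3:.1e}")   # 3.6e+10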
Example \(\PageIndex{15}\)

The speed of light is approximately \(6.7×10^{8}\) miles per hour. Express this speed in miles per second.
A unit analysis indicates that we must divide the number by \(3,600\).
\(\begin{aligned} 6.7\times 10^{8} \:mph &=\frac{6.7\times 10^{8}miles}{1\cancel{\color{red}{hour}}}\color{black}{\cdot}\left( \frac{1\cancel{\color{red}{hour}}}{60\cancel{\color{OliveGreen}{minutes}}} \right)\cdot \left( \frac{1\cancel{\color{OliveGreen}{minutes}}}{60 seconds} \right) \\&=\frac{6.7\times 10^{8}miles}{3600 seconds} \\&=\left(\frac{6.7}{3600} \right)\times 10^{8} \\ &\approx\color{Cerulean}{0.0019}\color{black}{\times 10^{8}} \qquad\color{Cerulean}{Rounded\:to\:two\:significant\:digits} \\ &=\color{Cerulean}{1.9\times 10^{-3}}\color{black}{\times 10^{8}} \\ &=1.9\times 10^{-3+8} \\ &=1.9\times 10^{5} \end{aligned}\)
The speed of light is approximately \(1.9×10^{5}\) miles per second.
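The same conversion can be verified in one line:

print(f"{6.7e8 / 3600:.1e}")  # 1.9e+05 miles per second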
Example \(\PageIndex{16}\)

By what factor is the radius of the sun larger than the radius of earth?
\(\begin{aligned} 6,300,000m &=6.3\times 10^{6}m\qquad\color{Cerulean}{Radius\:of\:Earth} \\ 700,000,000m &=7.0\times 10^{8}m\qquad\color{Cerulean}{Radius\:of\:the\:Sun} \end{aligned}\)
We want to find the number that when multiplied times the radius of earth equals the radius of the sun.
\(\begin{aligned}n\cdot \color{Cerulean}{radius\:of\:the\:Earth}&=\color{OliveGreen}{radius\:of\:the\:Sun} \\n&=\frac{\color{OliveGreen}{radius\:of\:the\:Sun}}{\color{Cerulean}{radius\:of\:the\:Earth}} \end{aligned}\)
\(\begin{aligned} n&=\frac{7.0\times 10^{8}m}{6.3\times 10^{6}m} \\ &=\frac{7.0}{6.3}\times\frac{10^{8}}{10^{6}} \\ &\approx 1.1\times 10^{8-6} \\ &=1.1\times 10^{2} \\ &=110 \end{aligned}\)
Exercise \(\PageIndex{2}\)

Divide:

\((6.75\times 10^{-8})\div (9\times 10^{-17})\).
\(7.5\times 10^{8}\)
Key Takeaways

Expressions with negative exponents in the numerator can be rewritten as expressions with positive exponents in the denominator.
Expressions with negative exponents in the denominator can be rewritten as expressions with positive exponents in the numerator.
Take care to distinguish negative coefficients from negative exponents.
Scientific notation is particularly useful when working with numbers that are very large or very small.
Exercise \(\PageIndex{3}\) Negative Exponents
Simplify. (Assume variables are nonzero.)
\(5^{−1}\)
\((−7)^{−1}\)
\(−7^{−1}\)
\(\left(\frac{1}{2}\right)^{−3}\)
\((\frac{3}{5})^{−2}\)
\((−\frac{2}{3})^{−4}\)
\(x^{−4}\)
\(y^{−1}\)
\(3x^{−5}\)
\((3x)^{−5}\)
\(\frac{1}{y^{−3}}\)
\(\frac{5}{2}x^{−1}\)
\(\frac{x^{−1}}{y^{−2}}\)
\(\frac{1}{(x−y)^{−4}}\)
\(\frac{x^{2}y^{−3}}{z^{−5}}\)
\(\frac{x}{y^{−3}}\)
\((ab)^{−1}\)
\(\frac{1}{(ab)^{−1}}\)
\(−5x^{−3}y^{2}z^{−4}\)
\(\frac{3}{−2x^{3}y^{−5}z}\)
\(3x^{-4}y^{2}\cdot 2x^{-1}y^{3}\)
\(−10a^{2}b^{3}⋅2a^{−8}b^{−10}\)
\((2a^{−3})^{−2}\)
\((−3x^{2})^{−1}\)
\((5a^{2}b^{−3}c)^{−2}\)
\((7r^{3}s^{−5}t)^{−3}\)
\((−2r^{2}s^{0}t^{−3})^{−1}\)
\((2xy^{−3}z^{2})^{−3}\)
\((−5a^{2}b^{−3}c^{0})^{4}\)
\((−x^{−2}y^{3}z^{−4})^{−7}\)
\((\frac{1}{2}x^{−3})^{−5}\)
\((2xy^{2})^{−2}\)
\((x^{2}y^{−1})^{−4}\)
\((−3a^{2}bc^{5})^{−5}\)
\((\frac{20x^{−3}y^{2}}{5yz^{−1}})^{−1}\)
\((\frac{4r^{5}s^{−3}t^{4}}{2r^{3}st^{0}})^{−3}\)
\((\frac{2xy^{3}z^{−1}}{y^{2}z^{3}})^{−3}\)
\((−\frac{3a^{2}bc}{ab^{0}c^{4}})^{2}\)
\((\frac{−xyz}{x^{4}y^{−2}z^{3}})^{−4}\)
\((−\frac{125x^{−3}y^{4}z^{−5}}{5x^{2}y^{4}(x+y)^{3}})^{0}\)
\((x^{n})^{−2}\)
\((x^{n}y^{n})^{−2}\)
1. \(\frac{1}{5}\)
3. \(−\frac{1}{7}\)
5. \(8\)
7. \(\frac{25}{9}\)
9. \(\frac{81}{16}\)
11. \(\frac{1}{x^{4}}\)
13. \(3x^{5}\)
15. \(y^{3}\)
17. \(\frac{y^{2}}{x}\)
19. \(\frac{x^{2}z^{5}}{y^{3}}\)
21. \(\frac{1}{ab}\)
23. \(\frac{−5y^{2}}{x^{3}z^{4}}\)
25. \(\frac{6y^{5}}{x^{5}}\)
27. \(\frac{a^{6}}{4}\)
29. \(\frac{b^{6}}{25a^{4}c^{2}}\)
31. \(−\frac{t^{3}}{2r^{2}}\)
33. \(\frac{625a^{8}}{b^{12}}\)
35. \(32x^{15}\)
37. \(\frac{y^{4}}{x^{8}}\)
39. \(\frac{x^{3}}{4yz}\)
41. \(\frac{z^{12}}{8x^{3}y^{3}}\)
43. \(\frac{x^{12}z^{8}}{y^{12}}\)
45. \(\frac{1}{x^{2n}}\)
Exercise \(\PageIndex{4}\)

The value in dollars of a new MP3 player can be estimated by using the formula \(V=100(t+1)^{−1}\), where \(t\) is the number of years after purchase.
How much was the MP3 player worth new?
How much will the MP3 player be worth in \(1\) year?
How much will the MP3 player be worth in \(4\) years?
How much will the MP3 player be worth in \(99\) years?
According to the formula, will the MP3 ever be worthless? Explain.
1. $\(100\)
3. $\(20\)
5. $\(1\)
Exercise \(\PageIndex{5}\) Scientific Notation
Convert to a decimal number.
\(9.3×10^{9}\)
\(1.004×10^{4}\)
\(6.08×10^{10}\)
\(4.01×10^{−7}\)
\(1.0×10^{−10}\)
\(9.9×10^{−3}\)
\(7.0011×10^{−5}\)
1. \(9,300,000,000\)
3. \(60,800,000,000\)
5. \(0.000000401\)
7. \(0.0099\)
Rewrite using scientific notation.
\(500,000,000\)
\(407,300,000,000,000\)
\(9,740,000\)
\(100,230\)
\(0.0000123\)
\(0.000012\)
\(0.000000010034\)
\(0.99071\)
1. \(5×10^{8}\)
3. \(9.74×10^{6}\)
5. \(1.23×10^{−5}\)
7. \(1.0034×10^{−8}\)
Perform the indicated operations.
\((3×10^{5})(9×10^{4})\)
\((8×10^{−22})(2×10^{−12})\)
\((2.1×10^{−19})(3.0×10^{8})\)
\((4.32×10^{7})(1.50×10^{−18})\)
\((9.12×10^{−9})\div (3.2×10^{10})\)

\((1.15×10^{9})\div (2.3×10^{−11})\)

\((1.004×10^{−8})\div (2.008×10^{−14})\)

\((3.276×10^{25})\div (5.2×10^{15})\)
\(59,000,000,000,000 × 0.000032\)
\(0.0000000000432 × 0.0000000000673\)
\(1,030,000,000,000,000,000 ÷ 2,000,000\)
\(6,045,000,000,000,000 ÷ 0.00000005\)
The population density of earth refers to the number of people per square mile of land area. If the total land area on earth is \(5.751×10^{7}\) square miles and the population in 2007 was estimated to be \(6.67×10^{9}\) people, then calculate the population density of earth at that time.
In 2008 the population of New York City was estimated to be \(8.364\) million people. The total land area is \(305\) square miles. Calculate the population density of New York City.
The mass of earth is \(5.97×10^{24}\) kilograms and the mass of the moon is \(7.35×10^{22}\) kilograms. By what factor is the mass of earth greater than the mass of the moon?
The mass of the sun is \(1.99×10^{30}\) kilograms and the mass of earth is \(5.97×10^{24}\) kilograms. By what factor is the mass of the sun greater than the mass of earth? Express your answer in scientific notation.
The radius of the sun is \(4.322×10^{5}\) miles and the average distance from earth to the moon is \(2.392×10^{5}\) miles. By what factor is the radius of the sun larger than the average distance from earth to the moon?
One light year, \(9.461×10^{15}\) meters, is the distance that light travels in a vacuum in one year. If the distance to the nearest star to our sun, Proxima Centauri, is estimated to be \(3.991×10^{16}\) meters, then calculate the number of years it would take light to travel that distance.
It is estimated that there are about \(1\) million ants per person on the planet. If the world population was estimated to be \(6.67\) billion people in 2007, then estimate the world ant population at that time.
The sun moves around the center of the galaxy in a nearly circular orbit. The distance from the center of our galaxy to the sun is approximately \(26,000\) light years. What is the circumference of the orbit of the sun around the galaxy in meters?
Water weighs approximately \(18\) grams per mole. If one mole is about \(6×10^{23}\) molecules, then approximate the weight of each molecule of water.
A gigabyte is \(1×10^{9}\) bytes and a megabyte is \(1×10^{6}\) bytes. If the average song in the MP3 format consumes about \(4.5\) megabytes of storage, then how many songs will fit on a \(4\)-gigabyte memory card?
1. \(2.7×10^{10}\)
3. \(6.3×10^{−11}\)
5. \(2.85×10^{−19}\)

7. \(5×10^{5}\)
9. \(1.888×10^{9}\)
11. \(5.15×10^{11}\)
13. About \(116\) people per square mile
15. \(81.2\)
17. \(1.807\)
19. \(6.67×10^{15}\) ants
21. \(3×10^{−23}\) grams
| CommonCrawl |
Does municipal ownership affect audit fees?
Linus Axén, Torbjörn Tagesson, Denis Shcherbinin, Azra Custovic & Anna Ojdanic

Journal of Management and Governance, volume 23, pages 693–713 (2019)
This study analyses whether municipal ownership affects and determines audit fees. Our model of the determinants of audit fees was tested on data from 249 Swedish municipal and 240 private corporations within the real estate industry, thus extending the study of audit fees to hybrid organizations. The statistical analysis was followed up with interviews with five partners from five different audit firms. The results of the study show that municipal corporations pay significantly lower audit fees than equivalent private corporations. This finding is primarily explained by lower perceived business risk and by the fact that municipalities are able to push prices by coordinating procurements of audit services.
A large number of studies explain the determination of audit fees in public corporations (e.g. Simunic 1980; Holm and Thinggaard 2014; André et al. 2016), private corporations (e.g. Willekens and Achmadi 2003; Hope and Langli 2010; Sundgren and Svanström 2013) and public sector organizations (e.g., Baber 1983; Baber et al. 1987; Johnsen et al. 2004; Basioudis and Ellwood 2005a, b; Collin et al. 2017). However, in this study, we focus on a particular category of companies, namely municipal corporations. During the last decades, local governments have gradually reduced direct forms of management in favour of various forms of corporatisation, public–private partnership and contracting out (Argento et al. 2010; Tagesson and Grossi 2012).
According to Collin et al. (2009), these corporations "are located in a twilight zone, being both private in one sense, acting according to the legislation of joint stock companies, and public in another sense, oriented towards fulfilling the needs of the municipal citizenry". Thus, municipal corporations are examples of hybrid organizations with a combination of public and private characteristics and objectives; they are subjected to demands from both the public and private sectors (Thomasson 2009a), operating at the intersection of the market and the public sector (Grossi and Thomasson 2015). Despite a period of increased hybridity and interest in hybrid organizations, the phenomenon is still not very well researched and the literature remains sparsely spread across many academic disciplines (Billis 2010); this also applies to the audit fee research area (Hay 2013).
In line with previous research, we build on Simunic (1980), considering the audited organization as well as the audit market; this will be discussed in more detail in the methodology section. The aim of this study is to examine whether municipal ownership is a factor that affects and determines audit fees.
The remainder of the paper is structured as follows: in the next section we give a brief description of the institutional setting for municipally-owned corporations. The theory section follows with the derivation of hypotheses. The next sections describe the empirical method and the analysis. The article ends with discussion, conclusions and suggestions for further research.
Theory and hypothesis development
The demands facing hybrid organizations are often contradictory, which generates ambiguity (Kickert 2001). By having a multidimensional goal structure, hybrid organizations often need to handle conflicting demands related to, for example, organizational effectiveness, profitability or different societal goals (Lindqvist 2013; Grossi et al. 2017). In addition to multidimensional goals, auditors as well as managers also have to consider and balance the demands from different, and sometimes heterogeneous, stakeholders (e.g. Jansson 2005; Calabró et al. 2013). That the corporations are part of a political context could lead to increased audit costs. Political conflicts (Deis et al. 1992) and competition between different political representatives and parties (Baber 1990; Cohen and Leventis 2013) raise demands for monitoring and auditing effort, implying increased audit fees.
The public viability and politicized environment may increase the reputation risk and audit effort for auditors (Redmayne et al. 2010; Cohen and Leventis 2013). In a Swedish context, where the risk of litigation is perceived as low (Svanström 2013; Alexeyeva and Svanström 2015), auditors still have incentives to maintain high audit quality in order to avoid reputational losses (Skinner and Srinivasan 2012). As a consequence of bad publicity (for example, related to a low-quality audit within a municipal company or a company failure) the public may lose confidence, both in the auditor and in the financial reports of the company (Barton 2005). Regarding negative media exposure due to unethical behaviour, the hybridisation has reduced the ability of citizens to get access to public documents, as municipal corporations need to consider both public and civil law (Erlingsson et al. 2008; Shaoul et al. 2012). According to Argento et al. (2010) this can sometimes create conflicts, as the legislation for corporations and for local governments is based on different presumptions. A lower degree of transparency increases the risk of corrupt behaviour and abuse of power (Linde and Erlingsson 2013). In the end this affects the auditor, who is responsible for considering the risk of material misstatement due to error and fraud (DeZoort and Harrison 2016). Maintaining public confidence is essential, as the audit process is largely non-transparent and it is impossible for a third party to assess the quality of a given audit (Hennes et al. 2014). A loss in reputation for the audit firm will impair its ability to keep hold of current clients and attract new ones (Defond and Zhang 2014). Due to a greater demand for audit quality, Skinner and Srinivasan (2012) find evidence that larger clients and those with growth opportunities are more likely to leave an audit firm that has suffered a reputational loss. Regarding the procurement of audit services for municipal corporations, audit firms have incentives to consider reputational risk when they submit tenders. Since there are limited opportunities to adjust audit fees "after the fact" to cover reputational losses, auditors need to act preventively and incorporate potential losses into the audit fee (Simunic and Stein 1996).
In the Swedish context, the principle of openness may increase the reputation risk, as may the fact that the municipal auditors have the right to appoint a lay auditor who (in addition to the external auditor) will express his or her own opinion about the adequacy of internal controls. When the external auditor is aware that a lay auditor will perform a partly parallel audit, the auditor will probably be more careful in order to avoid making mistakes that are then detected by the lay auditor. One specific area that will certainly be audited by all appointed auditors and requires special attention is internal control (SALAR 2017). Accuracy requires time, which may lead to increased audit costs. Consequently, the combination of public and private characteristics of hybrid organizations may lead to additional audit effort and thus increased audit fees.
However, according to Jensen and Payne (2005), agency costs are sometimes perceived to be less prevalent in the municipal sector than in the private sector. The character of public property reduces the incentives for monitoring (Zimmerman 1977). Even though municipal corporations, in a sense, are private organizations that must comply with company law, accountability is claimed within a municipal context, where the degree of formal accountability is generally low (Knutsson et al. 2012). Reduced accountability within hybrid organizations (Billis 2010; André 2010) can partly be explained by the occurrence of principal-agent relationships that are complex due to divergent expectations and demands from different stakeholders (Thomasson 2009b; Shaoul et al. 2012; Kankaanpää et al. 2014; Grossi and Thomasson 2015). An accountability gap thus exists (Sands 2006), which complicates the governance of the hybrid organization and creates an opportunity for managers and auditors to act in their own self-interest. A perceived low degree of accountability decreases the litigation risk, and from a wealth-maximization perspective the auditor will "perform an audit which will reduce the chance of a successful negligence suit to a level which is acceptable" (Sherer and Turley 1997, p. 60). Consequently, the combination of reduced accountability and a low litigation risk may lead to reduced audit effort and thus lower audit fees.
Regarding business risk, previous research (Bell et al. 2001; Niemi 2002; Kim and Fukukawa 2013) shows that auditors respond to higher business risk either by increasing audit effort (an increase in audit hours or the use of more experienced staff) or by charging a risk premium (no increase in audit effort) to cover potential losses. More specifically, client business risk is related to a wide spectrum of factors (industry conditions, organizational structure, and business processes) that all have the potential to affect the client's ability to achieve its objectives (Erickson et al. 2000; Bell et al. 2008; Stanley 2011; Kim and Fukukawa 2013). Ultimately, the concept of client business risk is "associated with the entity's survival" (Bell et al. 1997, p. 15), both in the short and the long term. Based on Markowitz (1952) and modern portfolio theory, it can be argued that municipal corporations have a higher business risk than private firms, as they are unable to allocate assets to other geographical areas or diversify among different classes of property (Viezer 2000; Boverket 2017). Private corporations are more adaptive to changed market conditions, as they acquire and sell properties to a greater extent (Boverket 2017). From an audit perspective, executed acquisitions would likely have an impact on the risk assessment procedure and create an increased demand for additional audit work (ISA 315). However, considering that municipal corporations have a financially strong owner with almost endless resources due to its taxation capacity (Chan 2003; Jones and Pendlebury 2004), the debt side of capital can be expected to attract little attention, since there is a credible tradition that municipal corporations do not go bankrupt (Collin and Tagesson 2010). Hence, the business risk of municipal corporations must be considered low, resulting in lower audit fees. By focusing on the financial statements and the administration of the management (according to ISA and generally accepted auditing standards in Sweden), the external auditors are able to restrict their mission, and crucial parts of the hybrid complexity will be handled by the lay auditors.
In sum, the political and hybrid characteristics of the municipal corporation indicate a potential reputational risk that may lead to increased audit fees. On the other hand, municipal ownership implies a low business risk, which suggests decreased audit fees.
Given the lack of directional clarity, we state the hypothesis to be tested in null form. We propose:
Municipal ownership does not affect and determine audit fees.
Method and empirical setting
The Swedish setting
Swedish municipalities are quite autonomous regarding the direction and organization of local activities (Agrento et al. 2010). Thus, municipal activities can be organized and carried out through municipal corporations (ibid.). Besides direct management, corporatization in the form of municipally owned joint stock corporations is the dominant organizational form of local public service provision in Sweden (Agrento et al. 2010). Today there are approximately 1600 joint stock corporations that are wholly or partly owned by one or more of the 290 Swedish municipalities. The balance sheet total of the municipally owned enterprises amounted to SEK 1163 billion in 2014, with an equity-assets ratio of 21.6% (SCB 2014). Thus, a large part of the municipalities' wealth and total assets is controlled by these corporations (e.g. Tagesson and Grossi 2012). This can be explained by the fact that the operation and ownership of municipal buildings, real estate, and public housing are very often organized in municipal corporations (Collin et al. 2009). In any case, both in terms of number and size, municipal corporations constitute an important part of the Swedish audit market. They combine organisational characteristics from both the private and the public sector (Grossi and Thomasson 2015) and are subject to both municipal and private law. In addition to the general legislation for limited corporations, including audit requirements, municipal corporations are also subject to certain rules of the Municipal Act (e.g. Haraldsson 2017). For example, (1) without explicit legal support, municipal corporations are forbidden to operate with the purpose of making a profit, (2) the municipal auditors have the right to appoint a lay auditor in addition to the auditor responsible for the corporate audit, and (3) the corporations are, just like the municipalities, subject to the principle of openness, which means that citizens basically have the right to access all documents concerning the company.
Within wholly owned municipal corporations it is actually statutory, according to the Swedish Local Government Act (1991:900), to appoint at least one lay auditor. In municipal corporations, lay auditors are elected among, or in consultation with, the municipal auditors. This means that the assignment is carried out by or in close collaboration with the municipal auditors (SALAR 2017). The task of the lay auditor is to review whether the company works in accordance with the owner's (municipality's) intention and goals, whether the activity is effective and if the internal control is sufficient (Companies Act 2005; SALAR 2017). The lay auditor submits a separate report to the Annual General Meeting.
The empirical data in this study consist of 249 Swedish municipal corporations and 240 private corporations within the real estate and housing industry. In order to locate municipality-owned public housing corporations, the Swedish Association of Public Housing Companies (SABO) was contacted. SABO provided a list of 300 member corporations that are owned by municipalities and managed as limited corporations. Out of the initial 300 municipal corporations, 51 were excluded due to lack of data. All financial data connected to the 249 municipal corporations were collected through the annual reports for year 2012.
Contact information for the private housing corporations was gathered through the websites of the municipalities. Municipalities have a duty to disclose information about private housing corporations that operate within the municipality. In total, financial data on 240 private housing corporations were collected through both annual reports and the Retriever Business database. In addition to the financial data, five interviews were conducted with partners from the Big-4 audit firms and Grant Thornton.
As mentioned in the introduction, we build our analysis on Simunic (1980), taking into consideration the audited organization as well as the audit market. Simunic's audit pricing study is regarded as a seminal work that has profoundly influenced subsequent research (Cobbin 2002; Hay et al. 2006). In order to examine the competitiveness of the audit market (price differences between Big-8 and non-Big-8 audit firms), Simunic developed an audit fee model that could be used to investigate the determinants of audit fees. The results of the study show that audit fees are significantly associated with a large number of factors, which essentially can be related to the size, risk, and complexity of the auditee. Regarding audit market competition, Simunic does not find any significant results supporting a Big-8 premium and monopoly pricing. The original audit fee model has been subject to a number of modifications (new explanatory variables) and has been tested in a large number of institutional environments (Cobbin 2002; Hay et al. 2006; Hay 2013). In a Nordic context, modified versions of the original audit fee model have been used to determine the audit fees of private corporations (Hope and Langli 2010; Sundgren and Svanström 2013), public corporations (Zerni et al. 2012; Holm and Thinggaard 2014), and municipalities (Johnsen et al. 2004; Collin et al. 2017).
In addition to the statistical analysis, qualitative interviews were conducted in order to gain a better understanding of the studied phenomena. By using multiple data sources, access to more detailed empirical data was obtained, which could be used to improve the validity of the results (Patton 2002; Eisenhardt 1989). The use of a sequential explanatory strategy made it possible to further interpret the statistical results and put forward alternative explanations that could be used to refine current theories (Creswell 2009). The interviews were semi-structured, implying that they were based on an interview guide containing open-ended questions. In total, interviews were conducted with five partners from five different audit firms. More specifically, the respondents represented each of the Big-4 audit firms and Grant Thornton. The respondents were selected with regard to representativeness, previous experience, and specific qualities (Alvesson 2011). As more than 80% of all corporations in the sample were audited by a Big-4 audit firm, it was essential, with regard to representativeness, to mainly select respondents from those firms. The use of highly knowledgeable individuals (partners) was necessary, as they have experience of pricing audit services in both municipal and private corporations. In the analysis of the empirical data, the focus has been on manifest statements, with emphasis on the actual meaning (e.g. Neuendorf 2002). The use of open-ended questions made it appropriate to use structural coding, which directed the analysis to the specific research questions (Saldaña 2009). The aim of the interviews has been to obtain a better understanding and to further test the reliability of our statistical analysis (Falkman and Tagesson 2008).
Dependent variable
In accordance with Simunic (1980) and subsequent studies (Firth 1997; Niemi 2002; Thinggaard and Kiertzner 2008), the total audit fees were used as dependent variable. Following the majority of previous studies (Hay et al. 2006) total audit fees were transformed by using the natural logarithm (AFEE).
Independent variable
Given the aim of this study, the independent variable was operationalized as a dummy variable where 1 denotes a municipal ownership and 0 a private ownership (MUNICIPAL_OWN).
Control variables
Meta-analyses by Hay et al. (2006) and Hay (2013) emphasize that client size is the most important determinant of audit fees and included in almost all studies. The size of the corporation was operationalized by using total assets (ASSETS) and, like AFEE, transformed by using the natural logarithm.
Inventory and receivables are two types of assets that are associated with increased inherent risk due to large volumes and fraud opportunities (Firth 1997). Inventory and receivables were divided by total assets (INVREC) (Ahmed and Goyal 2005; Niemi 2005; Griffin et al. 2009).
Due to the large number of corporations with fewer than two subsidiaries, the square root of the total number of subsidiaries was not used as a proxy for complexity. Instead, client complexity was operationalized as a dummy variable, where corporations with at least one subsidiary were coded 1, and corporations without any subsidiary 0 (SUBS). The possession of subsidiaries causes increased travel and coordination expenses for the auditor and obliges the auditor to learn more about the new organization and its operations (Firth 1997). The debt-to-equity ratio (DE) was used as a measure of leverage. Increased leverage is associated with a higher risk of bankruptcy and thus also higher financial risk for the auditor (Nikkinen and Sahlström 2004). Another risk measure connected to the auditor is the occurrence of a modified audit opinion. Corporations that received a modified audit opinion were coded 1 and corporations with a clean opinion 0 (Francis and Stokes 1986) (OPINION).
Following previous studies by Niemi (2002) and Casterella et al. (2004), the occurrence of financial losses was operationalized as a dummy variable, where 1 denotes that the company reported a negative net income during either of the last 2 years (LOSS). Corporations that possess a financial guarantee are protected against the risk of bankruptcy, which decreases the probability that an auditor will suffer financial losses. Corporations that hold a financial guarantee were coded 1, and other corporations 0 (GUARANTEE). Audit firm tenure and possible low-balling (DeAngelo 1981) were measured by a dummy variable: corporations that changed their signing audit firm during either of the last 2 years were coded 1, and 0 if no change occurred (AUDCHA). Free cash flow was measured as operating income before depreciation minus taxes and interest payments (Nikkinen and Sahlström 2004). As in previous studies (Gul and Tsui 1998, 2001; Nikkinen and Sahlström 2004), free cash flow was divided by total assets (FCF).
Non-audit services were operationalized as a dummy variable where corporations that purchased non-audit services from their incumbent audit firm were coded 1, and if not 0 (NAS). In order to control for a potential fee premium, individual proxies were used for each of the Big-4 audit firms (KPMG, EY, DELOITTE, PWC), and other audit firms were merged into one category (OAF). PWC was used as reference variable (Table 1). Two separate dummy variables were used to separate municipal corporations and private corporations audited by PWC (PWC_MUNICIPAL, PWC_PRIVATE). Our main audit fee model has the following structure:
Table 1 Definitions of variables used in the regression model
$$\begin{aligned} AFEE = {} & \alpha + \beta_{1}\,MUNICIPAL\_OWN + \beta_{2}\,ASSETS + \beta_{3}\,INVREC + \beta_{4}\,SUBS + \beta_{5}\,DE \\ & + \beta_{6}\,OPINION + \beta_{7}\,LOSS + \beta_{8}\,GUARANTEE + \beta_{9}\,AUDCHA + \beta_{10}\,FCF \\ & + \beta_{11}\,NAS + \beta_{12}\,KPMG + \beta_{13}\,EY + \beta_{14}\,DELOITTE + \beta_{15}\,OAF \\ & + \beta_{16}\,PWC\_MUNICIPAL + \beta_{17}\,PWC\_PRIVATE + e \end{aligned}$$
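For concreteness, the model can be estimated with standard OLS tooling. The sketch below uses Python (pandas and statsmodels); the input file and raw column names are hypothetical, and the variables are assumed to be pre-coded as defined in Table 1. It shows the model 1 specification; adding the PWC_MUNICIPAL and PWC_PRIVATE terms of model 2 requires adjusting the omitted reference category. This is an illustration, not the authors' analysis code.

```python
# Minimal sketch (not the authors' code) of the audit fee regression above.
# "audit_sample.csv" and the raw column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("audit_sample.csv")            # hypothetical data file

# Natural-log transforms, as described for the dependent and size variables
df["AFEE"] = np.log(df["total_audit_fees"])
df["ASSETS"] = np.log(df["total_assets"])

# Model 1: PWC is the omitted reference category among the audit firms.
formula = (
    "AFEE ~ MUNICIPAL_OWN + ASSETS + INVREC + SUBS + DE + OPINION + LOSS"
    " + GUARANTEE + AUDCHA + FCF + NAS + KPMG + EY + DELOITTE + OAF"
)
fit = smf.ols(formula, data=df).fit()
print(fit.summary())                             # coefficients, adjusted R2, F-test

# With a log-transformed dependent variable, a dummy coefficient b translates
# into an approximate percentage fee difference of exp(b) - 1.
b = fit.params["MUNICIPAL_OWN"]
print(f"Municipal fee differential: {100 * (np.exp(b) - 1):.1f}%")
```

Because the dependent variable is log-transformed, a dummy coefficient b corresponds to an approximate fee difference of exp(b) − 1, which is how percentage differentials such as the municipal effect reported below can be read off the estimates.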
Analysis and findings
Findings from quantitative analysis
The descriptive statistics are presented in Table 2. The table includes descriptive statistics of all variables used in the audit fee models.
Table 2 Descriptive statistics (n = 489)
As shown in Table 2, the total sample of 489 corporations consists of 249 (50.9%) municipal and 240 (49.1%) private corporations. The means of AFEE and ASSETS are 4.6 and 12.8, with standard deviations of 1.02 and 1.57, respectively. Of all corporations, 46% have at least one subsidiary, and 3% of total assets consist of inventory and receivables. Only 1.4% of all corporations in the sample received a modified audit opinion during 2012, and a little more than 50 corporations were secured by a financial guarantee. During the period 2011–2012, 24.1% of all corporations reported a negative net income, and during the same period 5.5%, or 27 corporations, changed their signing audit firm.
In 2012, almost three quarters (74.4%) of all corporations purchased non-audit services from their incumbent audit firm. More than 35% of all corporations in the sample have PWC as their signing audit firm. A market share of more than 44% for municipal corporations and of 26% for private corporations makes PWC a market leader among all of the audit firms. EY has a market share of 21.7%, followed by other audit firms 18.4%, KPMG 17.4% and DELOITTE 6.7%.
As shown in Table 3, a correlation matrix was computed in order to examine the correlations between the variables in the model. The dependent variable AFEE has a strong positive correlation with ASSETS (0.689) and more moderate positive correlations with SUBS (0.456) and NAS (0.211). Considerable correlation, close to 0.7, can be observed between some of the independent variables; to examine potential problems with multicollinearity, a variance inflation factor (VIF) test was performed. All of the VIF values are below 2.5, with a maximum of 1.636 for model 1 and 2.173 for model 2. Low VIF values decrease the likelihood of serious multicollinearity.
Table 3 Correlation table (Pearson correlations) n = 489
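These diagnostics are straightforward to reproduce. The sketch below, which reuses the hypothetical data frame `df` from the estimation example above, computes the Pearson correlation matrix and per-regressor VIFs; it is illustrative rather than the authors' code.

```python
# Collinearity diagnostics: Pearson correlations (cf. Table 3) and VIFs.
# Reuses the hypothetical data frame `df` from the estimation sketch above.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

regressors = ["MUNICIPAL_OWN", "ASSETS", "INVREC", "SUBS", "DE", "OPINION",
              "LOSS", "GUARANTEE", "AUDCHA", "FCF", "NAS"]
X = df[regressors]

print(X.corr(method="pearson"))        # pairwise Pearson correlation matrix

X_const = sm.add_constant(X)           # VIFs should be computed with an intercept
for i, col in enumerate(X_const.columns):
    if col != "const":
        vif = variance_inflation_factor(X_const.values, i)
        print(f"{col}: VIF = {vif:.3f}")   # values below ~2.5 indicate low collinearity
```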
Table 4 presents the results from the different regression (ordinary least squares) models, where models 1 and 2 use MUNICIPAL_OWN as the independent variable. Both models have an adjusted R2 value of 59.7%, significant F-statistics (0.000), 489 observations, and VIF values lower than 2.5. In model 1, each of the Big-4 audit firms was treated separately and other audit firms were merged into one category (OAF), with PWC used as the reference variable. In model 2, two new control variables were included in order to examine whether PWC was able to charge an audit fee premium in both municipal corporations and private corporations.
Table 4 Regression results
According to the results of both model 1 and model 2, we reject the null hypothesis that municipal ownership does not affect and determine audit fees. In both models, municipal corporations pay approximately 15% lower audit fees compared to private corporations. Of the control variables, ASSETS, INVREC, SUBS and NAS show a significant positive relationship with the audit fees on a 1% level and DE a significant negative relationship on a 1% level. Compared to the non-Big 4 audit firms, PWC charges an audit fee premium of nearly 25% and compared to KPMG 16%. It is evident that PWC charges higher audit fees compared to all other audit firms (except DELOITTE) in the sample. In model 2, we find that PWC, compared to the other audit firms, charges higher audit fees both within municipal corporations and private corporations.
In order to further examine the determinants of audit fees at a more disaggregated level, we separated the full sample into municipal (model 3) and private corporations (model 4). As none of the municipal corporations received a modified audit opinion during 2012, model 3 does not include OPINION as a control variable. For a similar reason, model 4 lacks GUARANTEE as a control variable. A comparison of the adjusted R-square values shows large differences between models 3 and 4: the explanatory power of the audit fee model is noticeably higher for private corporations (67.9%) than for municipal corporations (42.2%). With regard to the audit fee determinants, too, there are significant differences between the models. The regression results show that two of the variables, ASSETS and SUBS, have significant positive coefficients in both model 3 and model 4. In model 4, however, three additional variables show a significant relationship with audit fees: INVREC and NAS have significant positive relationships, and DE a significant negative relationship. In model 3, PWC manages to charge higher audit fees compared to KPMG, and in model 4, compared to the non-Big-4 audit firms.
Findings from the interviews
One of the interviewed partners explicitly pointed out that there is an increased reputation risk to accept the audit of a municipal corporation, since these corporations often are closely examined and scrutinized by mass media:
If it is a municipal corporation the audit firm can be very exposed. If something happens in a municipal corporation it will cause a lot of media attention in different ways. This attention will affect the audit firm negatively. Therefore, the audit firm needs to consider a risk premium which takes into account negative media exposure, even if it´s not directly connected to their assignment. (Partner 3)
However, four out of five partners claimed that audit fees should be lower in municipal corporations due to reduced business risk and the procurement of audit services. Municipal corporations normally have a good financial position and high solidity, and in addition they are subject to dividend restrictions. One of the partners pointed out that municipal corporations do not acquire and sell properties to the same extent as private corporations, which is of great importance in the assessment of business risk:
Regarding purchase and sale of real estate, there is not as much business happening in the municipal corporations as in the private corporations. Thus, you do not need to put as much focus on valuation issues in the municipal corporations. (Partner 1)
Regarding reputation risk, he pointed out that during an audit you always select and evaluate samples of risk items for compliance and testing purposes. If errors are then found, this might imply increased audit effort and, indirectly, higher audit fees. Regarding municipal corporations, he especially pointed to the media's interest in investigating and reporting on representation and study visits. Even if an error is not material per se, the political nature of the auditee can justify and motivate an extended review.
Another explanation for why audit fees are lower in municipal corporations than in private corporations, which emerged during the interviews, was that municipalities are able to push down prices by coordinating procurements for the municipality and the municipal corporations:
We have a somewhat standardized procedure regarding what we need to spend time on when we perform an audit. We evaluate how many members, specialists and experts that need to be a part of the team to carry out the audit assignment. […] Then we have to adapt the tender considering the market conditions. Based on all this we decide upon an indicative price. We are not allowed to offer a fixed price by law. (Partner 3)
If an audit firm wins the tender both for the assistance of the municipal auditors (e.g. lay auditors) and for the corporation, it is easier to coordinate the work between the lay auditor and the external auditor. According to the respondents, this is the most common scenario, and the interviews revealed that the external audit and the layman audit usually are coordinated.
Normally I meet the layman auditor and his assistant at least twice a year to coordinate the work. Usually, the auditor who assists the layman auditor is from the same audit firm as I, which facilitates the work. (Partner 1)
Discussion and conclusions
The aim of the study was to examine whether municipal ownership is a factor that affects and determines audit fees. Our analyses clearly show that municipal housing corporations pay significantly lower (approximately 15%) audit fees than privately owned corporations in the same industry. One conclusion, based on this result, is that audit firms price business risk and litigation risk higher than the reputation risk associated with the public visibility and politicized environment of municipal corporations. Based on the results of this study and existing theory, lower audit fees within municipal corporations may be explained by four important aspects. First, a reduced number of property transactions will likely reduce the business risk and lead to lower audit effort. In contrast, executed acquisitions will lead to increased audit effort, as more work is required and performed by senior managers and specialists. The second and third aspects concern reduced accountability within municipal corporations (Grossi and Thomasson 2015) and a low litigation risk, which imply an opportunity for auditors to act in their own self-interest. The occurrence of an accountability gap thus increases the likelihood that self-interested auditors will limit the scope of their work and perform an audit that is below average standard. Fourth, our interviews reveal that municipal corporations are able to push down audit fees as a result of the procurement process. This is further supported by Tagesson et al. (2015), who find evidence that price is a predominant criterion when municipalities procure audit services. Thus, it is possible that audit firms use a low-balling strategy to gain market share or to retain important clients. Regarding reputation risk, the interviews indicated that it is a factor that always needs to be considered when the auditor selects items for evaluation and testing. However, if an error is detected in a municipal corporation, the auditor has to consider not only materiality but also the political implications.
As an industry leader, PWC manages to charge an audit fee premium of 15–25% compared to the majority of the audit firms used in the sample; a division between municipal corporations and private corporations shows that PWC exhibits a higher fee premium for private corporations. From a neoclassical perspective, a fee premium can be explained by superior audit quality, which justifies higher audit fees. Another plausible explanation is that PWC holds a unique market position with a large distance to its closest competitors (Numan and Willekens 2012). Insignificant results regarding the possession of a financial guarantee can be explained by the fact that it is very rare that a Swedish municipal corporation goes bankrupt. Regarding municipal corporations, the signing auditor is exposed to possible reputation loss rather than high financial risk.
Regarding the results of the different audit fee models, there is a general explanatory power that applies to both municipal and private corporations. However, the audit fee model seems to be more adapted to private sector organizations, as the adjusted R2 is significantly higher for private corporations (67.9%) than for municipal corporations (42.2%). Previous research (Hope and Langli 2010; Sundgren and Svanström 2013) shows that the explanatory power of the audit fee model is reduced when shifting focus from public (80–85%) to private (50–55%) corporations. These results indicate that a model well specified for public corporations loses explanatory power when transferred to other contexts. For example, the audit fee model could probably be developed further and adapted to the conditions of municipal corporations and the characteristics of hybrid organizations. Thus, the present study has some limitations and identifies some suggestions for future research. Our statistical model lacks a theory of interest group intermediation. As indicated by the interviews, a reasonable assumption is that interest groups, such as the media, are directed by their own interests and could even be assumed to influence the risk assessment and pricing of audit services in hybrid organizations (Redmayne et al. 2010). Another factor that emerged from the supplementary interviews with the partners from the audit firms was the municipal negotiation power due to coordinated procurements for the municipality and the municipal corporations. In a model designed to explain audit fees in municipal corporations, this factor could be operationalized by the variables (1) total number of corporations in the municipal group and (2) the municipal budget for audit services.
[1] In cases involving the Supreme Court in Sweden, the claims against the auditor have to be solid if plaintiffs are to obtain compensation for damages (Aspholm 2002). In addition to the Supreme Court, out-of-court settlements and other court cases may also affect auditors' behavior (Zerni et al. 2012).
References
Agrento, D., Grossi, G., Tagesson, T., & Collin, S.-O. (2010). The 'externalisation' of local public service delivery: Experience in Italy and Sweden. International Journal of Public Policy, 5(1), 41–56.
Ahmed, K., & Goyal, M. (2005). A comparative study of pricing of audit services in emerging economies. International Journal of Auditing, 9(2), 103–116.
Alexeyeva, I., & Svanström, T. (2015). The impact of the global financial crisis on audit and non-audit fees. Managerial Auditing Journal, 30, 302–321.
Alvesson, M. (2011). Intervjuer—Genomförande, tolkning och reflexivitet. Malmö: Liber.
André, R. (2010). Assessing the accountability of government-sponsored enterprises and quangos. Journal of Business Ethics, 97(2), 271–289.
André, P., Broye, G., Pong, C., & Schatt, A. (2016). Are joint audits associated with higher audit fees? European Accounting Review, 25(2), 245–274.
Aspholm, I. (2002). Rättsekonomisk analys av revisors skadeståndsansvar i Norden [Legal analysis of the auditor's liability in the Nordic countries]. Helsinki: Hanken School of Economics, Department of Accounting and Commercial Law, Commercial Law.
Baber, W. R. (1983). Toward understanding the role of auditing in the public sector. Journal of Accounting and Economics, 5(3), 213–227.
Baber, W. R. (1990). Toward a framework for evaluating the role of accounting and auditing in political markets: The influence of political competition. Journal of Accounting and Public Policy, 9(1), 57–93.
Baber, W. R., Brooks, E. H., & Ricks, W. E. (1987). An empirical investigation of the market for audit services in the public sector. Journal of Accounting Research, 25(2), 293–305.
Barton, J. (2005). Who cares about auditor reputation? Contemporary Accounting Research, 22(3), 549–586.
Basioudis, I. G., & Ellwood, S. (2005a). An empirical investigation of price competition and industry specialisation in NHS audit services. Financial Accountability and Management, 21(2), 219–247.
Basioudis, I. G., & Ellwood, S. (2005b). External audit in the national health service in England and Wales: A study of an oversight body's control of auditor remuneration. Journal of Accounting and Public Policy, 24(3), 207–241.
Bell, T. B., Doogar, R., & Solomon, I. (2008). Audit labor usage and fees under business risk auditing. Journal of Accounting Research, 46(4), 729–760.
Bell, T. B., Landsman, W. R., & Shackelford, D. A. (2001). Auditors' perceived business risk and audit fees: Analysis and evidence. Journal of Accounting Research, 39(1), 35–43.
Bell, T. B., Marris, F. O., & Solomon, I. (1997). Auditing organizations through a strategic systems lens. Montvale, NJ: KPMG LLP.
Billis, D. (2010). Towards a theory of hybrid organizations. In D. Billis (Ed.), Hybrid organizations and the third sector: challenges for practice, theory and policy. Basingstoke, Hampshire, UK: Palgrave Macmillan.
Boverket. (2017). Allmännyttiga kommunala bostadsaktiebolag—utvärdering av tillämpningen av gällande lagstiftning, 2017, Boverket.
Calabró, A., Torchia, M., & Ranalli, F. (2013). Ownership and control in local public utilities: The Italian case. Journal of Management and Governance, 17(4), 835–862.
Casterella, J. R., Francis, J. R., Lewis, B. L., & Walker, P. L. (2004). Auditor industry specialization, client bargaining power, and audit pricing. Auditing: A Journal of Practice and Theory, 23(1), 123–140.
Chan, J. L. (2003). Government accounting: an assessment of theory, purposes and standards. Public Money and Management, 23(1), 13–20.
Cobbin, P. E. (2002). International dimensions of the audit fee determinants literature. International Journal of Auditing, 6, 53–77.
Cohen, S., & Leventis, S. (2013). An empirical investigation of audit pricing in the public sector: The case of Greek LGOs. Financial Accountability & Management, 29(1), 74–98.
Collin, S.-O., Haraldsson, M., Tagesson, T., & Blank, V. (2017). Explaining municipal audit costs in Sweden: Reconsidering the political environment, the municipal organisation and the audit market. Financial Accountability & Management, 33, 391–405.
Collin, S.-O., & Tagesson, T. (2010). Governance strategies in local government: a study of governance of municipal corporations in a Swedish municipality. International Journal of Public Policy., 5(4), 373–389.
Collin, S.-O., Tagesson, T., Andersson, A., Cato, J., & Hansson, K. (2009). Explaining the choice of accounting standards in municipal corporations. Critical Perspectives on Accounting, 20(2), 141–174.
Creswell, J. W. (2009). Research design. Qualitative, quantitative and mixed methods approaches. Thousand Oaks: Sage.
DeAngelo, L. E. (1981). Auditor independence, "low balling", and disclosure regulation. Journal of Accounting and Economics, 3(2), 113–127.
DeFond, M., & Zhang, J. (2014). A review of archival auditing research. Journal of Accounting and Economics, 58(2), 275–326.
Deis, D. R., & Giroux, G. A. (1992). Determinants of audit quality in the public sector. The Accounting Review, 67(3), 462–479.
DeZoort, F. T., & Harrison, P. D. (2016). Understanding auditors' sense of responsibility for detecting fraud within organizations. Journal of Business Ethics. https://doi.org/10.1007/s10551-016-3064-3.
Eisenhardt, K. M. (1989). Building theories from case study research. The Academy of Management Review, 14(4), 532–550.
Erickson, M., Mayhew, B. W., & Felix, W. L. (2000). Why do audits fail? Evidence from Lincoln Savings and Loan. Journal of Accounting Research, 38(1), 165–194.
Erlingsson, G. Ó., Bergh, A., & Sjölin, M. (2008). Public corruption in Swedish municipalities—Trouble looming on the horizon? Local Government Studies, 34(5), 585–603.
Falkman, P., & Tagesson, T. (2008). Accrual accounting does not necessarily mean accrual accounting: Factors that counteract compliance with accounting standards in Swedish municipal accounting. Scandinavian Journal of Management, 24(3), 271–283.
Firth, M. (1997). The provision of non-audit services and the pricing of audit fees. Journal of Business Finance and Accounting, 24(3), 511–525.
Francis, J. R., & Stokes, D. J. (1986). Audit prices, product differentiation, and scale economics: Further evidence from the Australian market. Journal of Accounting Research, 24(2), 283–293.
Griffin, P. A., Lont, D. H., & Sun, Y. (2009). Governance regulatory changes, international financial reporting standards adoption, and New Zealand audit and non-audit fees: Empirical evidence. Accounting and Finance, 49(4), 697–724.
Grossi, G., Reichard, C., Thomasson, A., & Vakkuri, J. (2017). Editorial. Public Money & Management, 37(6), 379–386.
Grossi, G., & Thomasson, A. (2015). Bridging the accountability gap in hybrid organizations: The case of Copenhagen Malmö Port. International Review of Administrative Science, 81(3), 604–620.
Gul, F. A., & Tsui, J. S. L. (1998). A test of the free cash flow and debt monitoring hypotheses: Evidence from auditing pricing. Journal of Accounting and Economics, 2(24), 219–237.
Gul, F. A., & Tsui, J. S. L. (2001). Free cash flow, debt monitoring, and audit pricing: Further evidence on the role of director equity ownership. Auditing: A Journal of Practice and Theory, 2(20), 71–84.
Haraldsson, M. (2017). When revenues are not revenues: The influence of municipal governance on revenue recognition within Swedish municipal waste management. Local Government Studies, 43(4), 668–689.
Hay, D. (2013). Further evidence from meta-analysis of audit fee research. International Journal of Auditing, 17(2), 162–176.
Hay, D. C., Knechel, W. R., & Wong, N. (2006). Audit fees: A meta-analysis of the effect of supply and demand attributes. Contemporary Accounting Research, 23(1), 141–191.
Hennes, K. M., Leone, A. J., & Miller, B. P. (2014). Determinants and market consequences of auditor dismissals after accounting restatements. The Accounting Review, 89(3), 1051–1082.
Holm, C., & Thinggaard, F. (2014). Leaving a joint audit system: Conditional fee reductions. Managerial Auditing Journal, 29(2), 131–152.
Hope, O., & Langli, J. (2010). Auditor independence in a private firm and low litigation risk setting. The Accounting Review, 85(2), 573–605.
International Auditing and Assurance Standards Board. (2009). International Standard on Auditing (ISA) No. 315. Identifying and assessing the risks of material misstatement through understanding the entity and its environment. New York: IAASB.
Jansson, E. (2005). The stakeholder model: The influence of the ownership and governance structure. Journal of Business Ethics, 53(1), 1–13.
Jensen, K. L., & Payne, J. L. (2005). Audit procurement: Managing audit quality and audit fees in response to agency costs. Auditing: A Journal of Practice and Theory, 24(2), 27–48.
Johnsen, Å., Meklin, P., Oulasvirta, L., & Vakkuri, J. (2004). Governance structures and contracting out municipal auditing in Finland and Norway. Financial Accountability and Management, 20(4), 445–477.
Jones, R., & Pendlebury, M. (2004). A theory of the published accounts of local authorities. Financial Accountability & Management., 20(3), 305–325.
Kankaanpää, J., Oulasvirta, L., & Wacker, J. (2014). Steering and monitoring model of state-owned enterprises. International Journal of Public Administration, 37(7), 409–423.
Kickert, W. J. M. (2001). Public management of hybrid organizations: Governance of quasi-autonomous executive agencies. International Public Management Journal, 4(2), 135–150.
Kim, H., & Fukukawa, H. (2013). Japan's Big 3 firms' response to clients' business risk: Greater audit effort or higher audit fees? International Journal of Auditing, 17(2), 190–212.
Knutsson, H., Ramberg, U., & Tagesson, T. (2012). Benchmarking through municipal benchmarking networks: Improvement or leveling of performance? Public Performance and Management Review, 36(1), 102–123.
Linde, J., & Erlingsson, G. Ó. (2013). The eroding effect of corruption on system support in Sweden. Governance, 26(4), 585–603.
Lindqvist, K. (2013). Hybrid governance: The case of household solid waste management in Sweden. Public Organizational Review, 13, 143–154.
Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7, 77–91.
Neuendorf, K. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage Publications Inc.
Niemi, L. (2002). Do firms pay for audit risk? Evidence on risk premiums in audit fees after direct control for audit effort. International Journal of Auditing, 6(1), 37–51.
Niemi, L. (2005). Audit effort and fees under concentrated client ownership: Evidence from four international audit firms. The International journal of Accounting, 40(4), 303–323.
Nikkinen, J., & Sahlström, P. (2004). Does agency theory provide a general framework for audit pricing? International Journal of Auditing, 3(8), 253–262.
Numan, W., & Willekens, M. (2012). An empirical test of spatial competition in the audit market. Journal of Accounting and Economics, 53(1–2), 450–465.
Patton, M. Q. (2002). Qualitative research & evaluation methods. Thousand Oaks, CA: Sage Publications Inc.
Redmayne, N. B., Bradbury, M. E., & Cahan, S. F. (2010). The effect of political visibility on audit effort and audit pricing. Accounting and Finance, 50, 921–939.
SALAR (Swedish Association of Local Authorities and Regions) (2017). https://skl.se/demokratiledningstyrning/revision/lekmannarevisionrevisionikommunalaforetag.1657.html (Accessed: August 25, 2017).
Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage Publications Inc.
Sands, V. (2006). The right to know and obligation to provide: Public–private partnerships, public knowledge, public accountability, public disenfranchisement and prison cases. UNSW Law Journal, 29(3), 334–341.
SCB [Statistics Sweden] (2014). Offentligt ägda företag 2014 [Publicly owned enterprises 2014]. Stockholm: SCB. www.scb.se/Statistik/OE/OE0108/2014A01/OE0108_2014A01_SM_OE27SM1501.pdf (Accessed: April 19, 2018).
Shaoul, J., Strafford, A., & Stapleton, P. (2012). Accountability and corporate governance of public private partnerships. Critical Perspectives on Accounting, 23, 213–229.
Sherer, M., & Turley, S. (Eds.). (1997). Current issues in auditing (3rd ed.). London: Paul Chapman.
Simunic, D. A. (1980). The pricing of audit services: Theory and evidence. Journal of Accounting Research, 18(1), 161–190.
Simunic, D. A., & Stein, M. (1996). The impact of litigation risk on audit pricing: A review of the economics and the evidence. Auditing: A Journal of Practice and Theory, 15(2), 119–134.
Skinner, D., & Srinivasan, S. (2012). Audit quality and auditor reputation: Evidence from Japan. The Accounting Review, 87(5), 1737–1765.
Stanley, J. D. (2011). Is the audit fee disclosure a leading indicator of clients' business risk? Auditing: A Journal of Practice and Theory, 30(3), 157–179.
Sundgren, S., & Svanström, T. (2013). Audit office size, audit quality and audit pricing: evidence from small- and medium-sized enterprises. Accounting and Business Research, 43(1), 31–55.
Svanström, T. (2013). Non-audit services and audit quality: Evidence from private firms. European Accounting Review, 22(2), 337–366.
Tagesson, T., Glinatsi, N., & Prahl, M. (2015). Procurement of audit services in the municipal sector: The impact of competition. Public Money & Management, 35(4), 273–280.
Tagesson, T., & Grossi, G. (2012). The materiality of consolidated reporting—An alternative approach to IPSASB. International Journal of Public Sector Performance Management, 2(1), 81–95.
The Swedish Companies Act 2005:551. Available at https://www.riksdagen.se/sv/dokument-lagar/dokument/svensk-forfattningssamling/aktiebolagslag-2005551_sfs-2005-551 (Accessed: December 6, 2017).
The Swedish Local Government Act 1991:900. Available at http://www.government.se/49b736/contentassets/9577b5121e2f4984ac65ef97ee79f012/the-swedish-local-government-act (Accessed: December 7, 2017).
Thinggaard, F., & Kiertzner, L. (2008). Determinants of audit fees: Evidence from a small capital market with a joint audit requirement. International Journal of Auditing, 12(2), 141–158.
Thomasson, A. (2009a). Exploring the ambiguity of hybrid organisations: A stakeholder approach. Financial Accountability & Management, 25(3), 353–366.
Thomasson, A. (2009b). Navigating in the landscape of ambiguity: A stakeholder approach to the governance and management of hybrid organizations. Lund: Lund University Press.
Viezer, T. W. (2000). Evaluating "within real estate" diversification strategies. Journal of Real Estate Portfolio Management, 6(1), 75–95.
Willekens, M., & Achmadi, C. (2003). Pricing and supplier concentration in the private client segment on the audit market: Market power or concentration. International Journal of Accounting, 38(4), 431–455.
Zerni, M., Haapamäki, E., Järvinen, T., & Niemi, L. (2012). Do joint audits improve audit quality? Evidence from voluntary joint audits. European Accounting Review, 21(4), 731–765.
Zimmerman, J. L. (1977). The municipal accounting maze: An analysis of political incentives. Journal of Accounting Research, 15, 107–144.
Department of Management and Engineering, Business Administration, Linköping University, 581 83, Linköping, Sweden
Linus Axén, Torbjörn Tagesson, Denis Shcherbinin, Azra Custovic & Anna Ojdanic
Correspondence to Torbjörn Tagesson.
Axén, L., Tagesson, T., Shcherbinin, D., Custovic, A., & Ojdanic, A. (2019). Does municipal ownership affect audit fees? Journal of Management and Governance, 23, 693–713. https://doi.org/10.1007/s10997-018-9438-4
Keywords: Audit fees; Business risk; Hybrid organizations; Municipal ownership
Optoplasmonic characterisation of reversible disulfide interactions at single thiol sites in the attomolar regime
Serge Vincent, Sivaraman Subramanian & Frank Vollmer
Nature Communications volume 11, Article number: 2043 (2020)
Probing individual chemical reactions is key to mapping reaction pathways. Trace analysis of sub-kDa reactants and products is obfuscated by labels, however, as reaction kinetics are inevitably perturbed. The thiol-disulfide exchange reaction is of specific interest as it has many applications in nanotechnology and in nature. Redox cycling of single thiols and disulfides has been unresolvable due to a number of technological limitations, such as an inability to discriminate the leaving group. Here, we demonstrate detection of single-molecule thiol-disulfide exchange using a label-free optoplasmonic sensor. We quantify repeated reactions between sub-kDa thiolated species in real time and at concentrations down to 100's of attomolar. A unique sensing modality is featured in our measurements, enabling the observation of single disulfide reaction kinetics and pathways on a plasmonic nanoparticle surface. Our technique paves the way towards characterising molecules in terms of their charge, oxidation state, and chirality via optoplasmonics.
Access to single-molecule reactions to determine the state of participating species and their reaction mechanisms remains a significant technological challenge. The application of fluorescent optical methods to investigate a single molecule's reaction pathway is often non-trivial. Sophisticated fluorescent labelling may not be available, while the temporal resolution is limited by photobleaching and transit times [1,2]. Monitoring reactions between molecules that weigh less than 1 kDa is further complicated by labels, as adducts can have severely altered reaction kinetics. Non-invasive optical techniques for studying the nanochemistry of single molecules have thus been elusive.
Thiol and disulfide exchange reactions are particularly relevant to the field of nanotechnology [3,4]. The reversibility of the disulfide bond has, for example, paved the way to realising molecular walkers and motors [5,6]. Bottom-up thiol self-assembled monolayers have shown potential as building blocks for sensors and nanostructuring [7]. The precise attachment/detachment of thiolated DNA origami has even extended to the movement of plasmonic nanoparticles (NPs) along an engineered track [8]. In nature, disulfide bonds are a fulcrum for cell biochemistry. Reactions that form these links usually occur post-translation, stabilising folding and providing structure for a number of proteins [9,10,11]. The cell regularly controls disulfide bonds between thiol groups, alternately guiding species through reduction and oxidation [12]. Redox potentials and oxidative stress in this context are reflected in the relative concentrations of thiols and disulfides [13].
Thiol/disulfide equilibria can be quantified in bulk, although often at the expense of high kinetic reactivity and the need for fluorescent or absorptive reagents to measure the exchange [14]. One such approach is an enzymatic recycling assay with 5-thio-2-nitrobenzoic acid absorbers capable of detecting thiols and disulfides down to 100's of picomolar concentrations [15]. This trades off quenching of thiol oxidation and exchange against the optimisation of reaction rates and the disruption of the thiol/disulfide equilibrium. As a disulfide bridge consists of two sulfur atoms that can interact with a thiolate (i.e. the conjugate base of a thiol), disulfide exchange is fundamentally intricate and the reaction branches for single molecules have yet to be fully characterised in the literature. Distinguishing leaving groups through a sensing element has so far been unachievable.
State-of-the-art sensors capable of transducing single-molecule interactions into optical [16,17,18], mechanical [19,20,21], electrical [22,23,24], or thermal [25] signals continue to emerge. Here we employ a label-free optoplasmonic system [26] that has the specific advantage of detecting individual disulfide interactions in solution. Due to the hybridisation between an optical whispering-gallery mode (WGM) resonator and the localised surface plasmon (LSP) resonance of a NP, perturbations to an LSP are observed through readout of a WGM coupled to it [27,28,29]. One strategy we propose is to immobilise thiolates on a gold NP surface with a separate functional group. Following selective covalent binding, immobilised thiolates may participate in redox reactions while under non-destructive probing. Reactions between sub-kDa reactants are monitored in real time and at concentrations as low as 100's of attomolar, hence isolating the disulfide chemistry of single molecules in vitro. Such reactions frequently result in abrupt changes in hybrid LSP-WGM resonance linewidth/lifetime, a surprising phenomenon that was considered unresolvable by WGM mode broadening or splitting [30,31,32]. We clarify in this study that disulfide linkages to bound thiolate receptors can exclusively affect the hybrid LSP-WGM resonance linewidth, beyond a description via an unresolved mode split. Each linewidth transition per exchange also assigns a status to the leaving group. Our data suggest a sensing modality for inferring the kinetics and chains of single disulfide reactions in proximity to a plasmonic NP, paving the way towards assessing molecular charge, oxidation, and chirality states on an integrated platform.
Experimental scheme
A gold NP surface serves as an effective detection area for biomolecular characterisation on an optoplasmonic sensor. Light field localisation and nanoscale mode volumes at the NP hotspots enable sensitivity to surface modification, wherein covalent bonding to the NP restricts the total number of binding sites. Previously, thiol- and amine-based immobilisation has been explored on our optoplasmonic sensor [33]. Under particular pH conditions that dictate the molecular charge of the analyte, thiol and amine functional groups were reported to bind to different facets of gold NPs [34,35,36,37]. For thiols, the binding preference is for the (111) and (100) planes of a gold surface, which are present in an ordered crystal lattice. For amines, binding preferentially takes place at uncoordinated gold adatoms. Measurements from [33] showed an approximately 2 orders of magnitude larger number of binding sites for thiols compared to amines on gold nanorods (NRs), demonstrating variable selectivity depending on surface regularity. If the molecular charge is controlled and the NR surfaces are appropriately deformed, conditions can be reached where molecules containing both amine and thiol groups predominantly bind onto gold via the amine, creating recognised thiolates [38]. These nucleophiles may attack disulfide bonds in molecules that diffuse to them. Reducing agents introduced in solution, such as tris(2-carboxyethyl)phosphine (TCEP), can then reduce bound disulfides and complete a redox cycle. This pathway establishes cyclical reactions near the NP surface to be analysed statistically.
The LSP resonance of a plasmonic NP can be weakly coupled to a WGM resonance of a dielectric microcavity. Through this coupling, molecules that successfully perturb the gold NP surface can be detected as shifts in an LSP-WGM resonance. Light coupled in and out of the hybrid system allows for evaluation of gold NP perturbations, i.e. by sweeping the laser frequency across the LSP-WGM resonance and spectrally resolving the resonant lineshape of the transmitted light. In our setup we excite WGMs in a silica microsphere, with diameters in the range of 70–90 µm, using a tuneable external cavity laser with 642-nm central wavelength. The laser beam is focused onto a prism surface to excite WGMs by frustrated total internal reflection. With a sweep frequency of 50 Hz, the transmission spectrum is acquired through photodetection at the output arm every 20 ms and a Lorentzian-shaped resonance is tracked (Fig. 1a, b). The evanescent field at the periphery of the microcavity is subsequently enhanced and localised in the vicinity of bound, LSP-resonant gold NRs. The cetyltrimethylammonium bromide-coated NRs have a 10-nm diameter and 24-nm length, with a longitudinal LSP resonance at \(\lambda_0 =\) 650 nm. In the event of molecules interacting with the gold NR, the LSP-WGM lineshape position \(\lambda_{\mathrm{Res}}\) and/or full width at half maximum \(\kappa\) will vary. Discrete jumps in these parameters may be measured in the time domain and are indicative of molecular bond formation with gold. Groupings of signal fluctuations exceeding 3σ from transient arrival/departure can also arise (Fig. 1c), where σ is the standard deviation of the noise derived from a representative 20-s trace segment. The resonance shifts of these signal packets are compiled for a series of analyte concentrations to confirm Poissonian statistics and first-order reaction rates (Fig. 1d). An extrapolation error exists in Fig. 1d given the chosen concentration range, yet the event rate is most closely linear with concentration. Despite the negligible scattering and absorption cross-section of a single molecule, the ultrahigh-quality-factor WGM and its back-action on the perturbed LSP act as a channel to sense loss changes intrinsic to or induced by a gold NP antenna. NP absorption spectroscopy by means of optoplasmonics [39] provides groundwork for such a modality, as the absorption cross-section change in a NP due to surface reactions may become detectable. We affirm that signal traces can exhibit (1) simultaneous shifts in resonant wavelength, linewidth, and resolved mode splitting [30,32] and (2) exclusive linewidth variation when single molecules diffuse within the LSP evanescent field decay length of the NP. Note here that the spectral resolution of our system is set by the laser frequency noise.
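To make the tracking step concrete, the following Python sketch fits a Lorentzian dip to each 20-ms transmission sweep and flags steps exceeding 3σ. It is an illustrative reconstruction rather than the instrument's acquisition code; the function names, initial-guess heuristics, and data layout are assumptions.

```python
# Illustrative resonance tracking: fit a Lorentzian dip per sweep, then flag
# discrete jumps in the tracked parameters. Not the authors' published code.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(wl, depth, wl_res, kappa, offset):
    # Transmission dip centred at wl_res with full width at half maximum kappa.
    return offset - depth / (1.0 + ((wl - wl_res) / (kappa / 2.0)) ** 2)

def track_resonance(wavelengths, transmission):
    """Return (lambda_res, kappa) fitted from a single 20-ms sweep."""
    p0 = [
        transmission.max() - transmission.min(),    # dip depth guess
        wavelengths[np.argmin(transmission)],       # centre guess at the minimum
        (wavelengths[-1] - wavelengths[0]) / 10.0,  # crude linewidth guess
        transmission.max(),                         # off-resonance baseline
    ]
    popt, _ = curve_fit(lorentzian_dip, wavelengths, transmission, p0=p0)
    return popt[1], popt[2]

def detect_steps(trace, sigma):
    """Indices where consecutive sweeps shift by more than 3 sigma.

    sigma is the noise standard deviation, estimated (as in the text) from a
    representative quiet 20-s segment of the same trace.
    """
    return np.where(np.abs(np.diff(trace)) > 3.0 * sigma)[0]
```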
Fig. 1: Optoplasmonic sensor setup and quantification of adsorbing d-cysteine.
a Scheme for LSP-WGM based sensing. A beam emitted from a tuneable laser source, with central wavelength of 642 nm, is focused onto a prism face to evanescently couple to a microspherical WGM cavity. The WGM excites the LSPs of Au NRs on the cavity surface and the hybrid system's transmission spectrum is acquired at the output arm of the setup. d-cysteine (d-Cys) analytes have carboxyl, thiol, and amine groups. b Sensing through tracking perturbations of the Lorentzian resonance extremum in the transmission spectrum. The resonant wavelength \(\lambda _{{\mathrm{Res}}}\) and linewidth \(\kappa\) that define the quality factor \(Q = \lambda _{{\mathrm{Res}}}/\kappa\) are shown in the subfigure, as is unresolved mode splitting due to scattering. c Single-molecule time-domain signatures with signal value \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\) and duration \({\mathrm{\Delta }}\tau\) from the transit of d-Cys near Au NRs. The solvent used is 0.02% sodium dodecyl sulfate (SDS) in deionised water. d Linear dependence of event frequency on analyte concentration that suggests first-order rates. Events conform to a Poisson process (Supplementary Fig. 1).
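The consistency checks behind Fig. 1d can be emulated with a few lines of analysis code. The numbers below are placeholders rather than measured data; the point is only the shape of the test: a linear rate-versus-concentration fit and a Kolmogorov-Smirnov comparison of inter-event times against an exponential distribution, as expected for a Poisson process.

```python
# Illustrative consistency checks for first-order kinetics and Poisson arrivals.
# All numbers below are placeholders, not measured values from the experiment.
import numpy as np
from scipy import stats

# (1) Event rate should scale linearly with analyte concentration.
conc = np.array([100.0, 200.0, 400.0, 800.0])     # hypothetical concentrations (aM)
rate = np.array([0.011, 0.022, 0.041, 0.085])     # hypothetical event rates (1/s)
slope, intercept, r, p, se = stats.linregress(conc, rate)
print(f"rate ~ {slope:.2e} * c + {intercept:.2e} (R^2 = {r**2:.3f})")

# (2) For a Poisson process, inter-event times are exponentially distributed.
dt = np.random.exponential(scale=25.0, size=200)  # placeholder inter-event times (s)
ks_stat, ks_p = stats.kstest(dt, "expon", args=(0.0, dt.mean()))
print(f"KS test vs exponential: statistic={ks_stat:.3f}, p={ks_p:.3f}")
```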
Disulfide reaction mechanism and statistical analysis
Loading of the gold NR surface with thiolate linkers requires a set of restrictions on the solvent environment at room temperature. To promote amine-gold bonds, we use a buffer at a pH that is above an aminothiol's logarithmic acid dissociation constants \(\mathrm{pKa}_{\mathrm{SH}}\) and \(\mathrm{pKa}_{\mathrm{NH_2}}\). Within this balance, anionic species with negatively charged \(S^-\) and neutral \(\mathrm{NH_2}\) groups will dominate, as per the Henderson–Hasselbalch equation [40]. A molecule must first reach the gold surface by overcoming Debye screening from surface charges [41], e.g. from the gold NR's coating and pre-functionalisation of the glass microcavity. Such electrostatic repulsive forces can be reduced by electrolyte ions in substantial excess of the molecules under study. Analogous to raising the melting temperature of DNA from ambient conditions by increasing the salt concentration, the arrival rate of molecules to detection sites plateaus when the salt concentration is on the order of 1 M. Due to indiscriminate attachment of gold NRs onto the glass microcavity in steps preceding single-molecule measurements (Supplementary Fig. 2a), molecules in the medium should also be replenished to account for capture by NRs outside of the WGM's evanescent field (i.e. those that do not contribute to LSP-WGM hybridisation). Overall, these factors necessitate high electrolyte concentrations and recurring injection of analyte into a buffer of pH > \(\mathrm{pKa}_{\mathrm{NH_2}}\) to attain a sufficient reaction rate in the subfemtomolar regime.
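As a concrete reading of the Henderson–Hasselbalch argument, the deprotonated fraction of each group follows directly from the pH and the relevant pKa; the pH value of 11 used in the worked numbers below is an illustrative choice ("slightly above" the amine pKa), not a reported experimental parameter:

$$\mathrm{pH} = \mathrm{pKa} + \log_{10}\frac{[\mathrm{A^-}]}{[\mathrm{HA}]} \quad\Longrightarrow\quad f_{\mathrm{A^-}} = \frac{1}{1 + 10^{\,\mathrm{pKa}-\mathrm{pH}}}$$

For cysteamine at pH 11, this gives a thiolate fraction of \(1/(1+10^{8.19-11}) \approx 0.998\) and a neutral-amine fraction of \(1/(1+10^{10.75-11}) \approx 0.64\), which is the regime in which amine-gold binding is promoted.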
The aminothiol linkers of interest for our experiments are chemically simple amino acids or pharmaceuticals with minimal side chains. For chiral studies, d- and l-cysteine (\({\mathrm{pKa}}_{{\mathrm{SH}}}\) = 8.33 and \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) = 10.7842) are good candidates as they contain a carboxyl group that does not interfere with disulfide reactions. Nevertheless, for simplicity, we began with cysteamine (\({\mathrm{pKa}}_{{\mathrm{SH}}}\) = 8.19 and \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) = 10.7543) as it is a stable aminothiol that excludes any side chains. The cysteamine's amine group favourably binds to our optoplasmonic sensor in a sodium carbonate–bicarbonate buffer at a pH slightly above 10.75, with 1 M sodium chloride (Fig. 2a, b). Typical signal patterns in Fig. 2a for amine-gold binding are discontinuous steps in both \(\lambda _{{\mathrm{Res}}}\) and \(\kappa\) on the order of 1 to 10 fm, with monotonic redshifts in \(\lambda _{{\mathrm{Res}}}\). Signal magnitude and direction depend on variables such as the position and orientation of the gold NR detector on the microcavity26, the detection site on the NR itself, and the analyte's molecular mass/polarisability44. As time evolves and analyte is steadily supplied, the binding sites become occupied and the event rate decreases (Fig. 2c). These independent shifts in \(\lambda _{{\mathrm{Res}}}\) and \(\kappa\) are collected in Fig. 2d and showcase non-monotonic linewidth narrowing and broadening once single molecules bind. This is an unconventional result: \({\mathrm{\Delta }}\kappa\) takes either sign with roughly equal likelihood and shows no apparent proportionality to \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\). The hypothesis that a singlet of an unresolved mode split generates the linewidth shift is therefore unsubstantiated.
Fig. 2: Single cysteamine binding to gold NRs via amine at subfemtomolar concentration.
a Discrete signals in the LSP-WGM resonance trace from covalent bonding of the \({\mathrm{NH}}_2\) ligands to Au in a basic buffer. b Conceptual diagram of the cysteamine surface reaction. Cysteamine, with its thiol and amine groups, forms an amine-gold bond as indicated by the red arrow. c Exponential decay in cumulative binding step count as the system approaches saturation. In this regime, it is necessary to periodically inject more analyte in solution as scarce analytes are lost to external immobilisation (i.e. from undetected NRs that are not excited by the WGM). d Histograms depicting the resonance shift \({\mathrm{\Delta }}\lambda _{{\mathrm{Res}}}\) and linewidth shift \({\mathrm{\Delta }}\kappa\) for binding events, as well as their related event time separations \({\mathrm{\Delta }}t_1\) and \({\mathrm{\Delta }}t_2\). The \({\mathrm{\Delta }}\kappa\) distribution shows both positive and negative shifts, while \({\mathrm{\Delta }}t_1\) and \({\mathrm{\Delta }}t_2\) distributions are Poissonian.
An added convenience of choosing cysteamine is its comparable diffusion kinetics with respect to N-acetylcysteine (NAC)—a synthetic precursor of cysteine with acetyl protecting group in place of primary amine. We used NAC as a negative control and the response revealed a negligible rate of thiol-gold bond formation at high pH and high concentration (Fig. 3). A lack of step discontinuities within the trace supports amine-gold bonding in the basic buffer and therefore thiol-functionalisation of the gold NRs with cysteamine.
Fig. 3: Background and negative control measurement with NAC at micromolar concentration.
a Resonance and linewidth shift traces exhibiting transient signal above 3σ with rates on the order of 0.1 s\(^{-1}\) over several minutes; however, these persist in the presence and absence of NAC and TCEP in solution. No permanent binding patterns were found during peak tracking. b NAC molecule, with carboxyl, thiol, and (amine-attached) acetyl groups, near a detection site.
pH-dependent disulfide nanochemistry
The charge of single molecules diffusing to the optoplasmonic sensor can lead to a diverse set of reactions and LSP-WGM resonance perturbations. Dimerisation, for instance, is maximised when thiol groups are made nucleophilic through deprotonation at a pH above the \({\mathrm{pKa}}_{{\mathrm{SH}}}\). To circumvent electrostatic repulsion between primary amines, high aminothiol dimerisation and disulfide exchange rates demand a pH greater than the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). We therefore investigated these effects by way of pH variation near the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). After pre-loading the gold NRs on the glass WGM microcavity with cysteamine in Fig. 4a, we flushed the chamber volume and replaced the surrounding dielectric with sodium carbonate-bicarbonate buffer, 1 M sodium chloride, at pH 10.19 < \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). Figure 4b highlights signal activity upon addition of a racemic, subfemtomolar mixture of reduced d- and l-cysteine. Transient peaks in the linewidth appear in packets that dissipate (see Fig. 4c) as external capture removes available dl-cysteine. We attribute these peaks to thiolates that fail to form a disulfide bond (Fig. 4d). The Poisson-distributed events for t ≤ 2 min have a mean rate of 0.01 aM\(^{-1}\) s\(^{-1}\) that surpasses the diffusion limit (i.e. \(D_{{\mathrm{DL-Cys}}}\) ~ 10\(^{-10}\) m\(^2\) s\(^{-1}\), \(k_{{\mathrm{on}}}\) ~ 1 nM\(^{-1}\) s\(^{-1}\), ref. 45), implying molecular trapping near the gold NR hotspots. Charged molecules are, by analogy to atomic ions41, bound by an electrostatic potential well whose depth increases in proportion to ionic strength.
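A back-of-the-envelope Smoluchowski estimate makes the comparison with diffusion explicit (a sketch; the 10-nm capture radius is an assumed hotspot scale, and the observed per-concentration event rate is taken from the text above).

import numpy as np

N_A = 6.022e23                                       # Avogadro constant, mol^-1

def smoluchowski_kon(D_m2_per_s, R_m):
    """Diffusion-limited on-rate 4*pi*D*R, converted to M^-1 s^-1."""
    return 4.0 * np.pi * D_m2_per_s * R_m * N_A * 1e3   # 1 M = N_A * 1e3 m^-3

k_diff = smoluchowski_kon(1e-10, 10e-9)    # ~7.6e9 M^-1 s^-1 (~7.6 nM^-1 s^-1)
k_obs = 0.01 / 1e-18                       # 0.01 aM^-1 s^-1 in M^-1 s^-1
print(k_obs / k_diff)                      # ~1e6: far above the diffusion limit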
Fig. 4: Cysteamine pre-functionalisation and disulfide events from converging dl-cysteine.
a Binding of cysteamine to Au NRs via amine in basic buffer. b Linewidth fluctuations induced by racemic dl-cysteine interacting with immobilised cysteamine thiolates at \({\mathrm{pKa}}_{{\mathrm{SH}}}\) < pH < \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). TCEP reducing agent is employed here to counteract cysteine oxidation/dimerisation. c Linewidth shift \({\mathrm{\Delta }}\kappa\) and event time duration \({\mathrm{\Delta }}\tau\) histograms extracted from the resonance trace of (b). The mean event rate of the Poisson distributions passed through an inflection point, decreasing from 0.01 aM\(^{-1}\) s\(^{-1}\) to 0.003 aM\(^{-1}\) s\(^{-1}\) within an 8-min interval as the diffusing cysteines were captured. d dl-cysteine and bound cysteamine transiently interacting via their thiol groups.
For proof of principle, we increased the environmental pH to 11.09 and raised the analyte concentration. In this regime we expected sustained reversible disulfide reactions with defined signal states in the resonance trace. The neutral amines of the highly anionic cysteamine and l-cysteine indeed result in binding/unbinding state transitions as in Fig. 5a, with clear linewidth broadening and narrowing steps of roughly equal mean height. The stability of the disulfide reactions is attributed to an order of magnitude rise in hydroxide ion concentration past the \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\) and the event rate is maintained by electrostatic trapping. Since TCEP continually cleaves bound dimers during redox cycling, the monomer or dimer state of the leaving group can also be identified (cf. Supplementary Fig. 4). This trial was repeated in Fig. 5b for a larger molecule, 5,5′-dithiobis-(2-nitrobenzoic acid) (DTNB/Ellman's reagent), which readily underwent disulfide exchange with bound cysteamine linkers. In all cases, reducing agent concentration was adjusted until switching signals in the linewidth were observed. Resolvable dwell times and hence steady diffusion of reducing agent to the detection site were found at high molar excess > 1000.
Fig. 5: Cyclical binding/unbinding and exchange interactions with single mixed disulfides.
a Real-time linewidth step oscillations in the LSP-WGM resonance trace from redox reactions involving individual cysteamine-l-cysteine disulfides at pH > \({\mathrm{pKa}}_{{\mathrm{NH}}_2}\). These bridges are formed between cysteamine linkers and l-cysteine thiolates/disulfides (with neutral amines), then promptly cleaved by excess TCEP. b Linewidth patterns similar to a from individual cysteamine-TNB disulfides. Thiol-disulfide exchange may be triggered by DTNB dimers alone; however, cycling is ensured through reduction with TCEP. TNB has a benzene ring with carboxyl, thiol, and nitro groups. c Apparent resonant wavelength and linewidth signal steps, from thiol-disulfide exchange with DTNB and bound cysteamine, in a resolvable LSP-WGM doublet/split mode.
Some insight into oscillation patterns is provided by the mode split traces of Fig. 5c, the lineshapes for which are discernible when the coupling/scattering rate is larger than the cavity decay rate. The WGM eigenmode degeneracy is lifted here and the resonant wavelength traces for the high-energy and low-energy modes, respectively denoted as \(\lambda _ +\) and \(\lambda _ -\), disclose two separate binding events in time. Such divergence comes from perturbations of two distinct gold NRs lying at different spatial locations along the standing wave formed by counterpropagating WGMs. One NP is excited near a node of a constituent mode while the second lies near an antinode, and the situation inverts for the other constituent mode. During single peak tracking, information in the split mode resonance wavelengths is encoded in the linewidth trace: a shortcoming that, when splitting is resolvable, can be corrected to offer more robust molecular analysis by correlation to split mode properties and further detection site discrimination. Anomalous linewidth signatures of Fig. 5a, b that exclude resonance wavelength shifting are, however, only superficially explained via mode splitting. In order for the resonant wavelength to stay constant and relative mode splitting to be a contributing factor, either \({\mathrm{\Delta }}\lambda _ +\) = \(- {\mathrm{\Delta }}\lambda _ -\) or the transmission dip depth must oscillate; we have detected neither feature in our recorded split mode traces. For the former to hold true, any heterodyne beat note tied to frequency splitting would have to stably oscillate between two beat frequencies. It is instead conceivable that the combination of LSP-WGM resonance energy invariance and lifetime variance implicates a relationship between the LSP resonance and molecular vibrational modes46. A transition between bound vibrational states that are close in energy and reside in two continuums is possible. With shifts in the electronic resonance-dependent Raman cross-section upon chemical reaction and/or charge transfer, the Raman tensor and hence the optomechanical coupling rate may be decipherable. In this way the charge state of bound cysteamine linkers and their disulfide linkages can influence the optoplasmonic sensor response to grant molecular charge sensitivity47.
Experimental results were presented for single aminothiols binding to gold nanoantennae of an optoplasmonic sensor system at subfemtomolar concentrations. We leveraged these aminothiol linkers (i.e. cysteamine) by first reacting their amine groups with gold and then following repeatable disulfide interactions between the linkers and diffusing thiolates/disulfides, with TCEP reducing agent as a counterbalancing reagent. The thiol-functionalisation of gold was reinforced by negative controls performed with thiolated molecules in an equivalent sensor configuration. Statistical analysis of signal patterns at hundreds of attomolar concentration revealed that single-molecule detection is finite in time, as analyte is depleted by external adsorption, ligand capture, or other loss channels. This advance is in part guided by the selection of low-complexity analytes and by saturating environmental conditions to suppress Debye screening. Signatures in the linewidth traces were championed throughout our measurements as they were shown to carry leaving group information imprinted onto LSP-WGM resonance perturbations.
Despite the existence of identifiable disulfide interactions from DTNB, d-cysteine, and l-cysteine, a comprehensive theory to describe the underlying optoplasmonic detection mechanism has yet to emerge. Nonetheless, the dwell times and statistical inferences of cyclical single-molecule interactions in this work remain critical in circumventing site heterogeneity and characterising surface-bound thiolates and disulfides. Reactions near the nanoantenna hotspots have demonstrably lower degrees of freedom via spatial constraints and redox cycling. We foresee future refinements to the temporal resolution by locking the laser frequency to the WGM resonance. Our disulfide quantification paradigm ultimately opens avenues for charge transfer observation, including direct implementation of all sensing channels towards pinpointing single molecules and unravelling their nanochemistry.
Sample and microsphere preparation
Chemicals were purchased from Sigma-Aldrich and Thermo Scientific. The principal solvent in which analytes were dissolved was ultrapure water delivered from a Merck Q-POD dispenser. Solutions without NRs were passed through a 0.2 µm Minisart syringe filter and dilutions were performed with Gilson P2L, P20L, and P1000L pipettes. Each microspherical cavity was reflowed from a Corning SMF-28 Ultra, single-mode telecommunications fibre by CO2 laser light absorption. Surface tension during heating yielded a circularly symmetric cavity structure with a smooth dielectric interface. Mechanical stabilisation of the suspended microcavity was provided by prior insertion into a Thorlabs CF126-10 ceramic ferrule, which was then secured to an aluminium holder fixed to a three-axis translation stage. The diffusion-limited sample volume of 300–500 µL was enclosed by a glass window, N-SF 11 prism face, and sandwiched polydimethylsiloxane (PDMS) basin.
Surface chemistry protocol
Once the cavity was submerged in aqueous solution and a coupling geometry was found via alignment, cetyltrimethylammonium bromide-coated gold NRs (diameter = 10 nm, length = 24 nm, and LSPR wavelength = 650 nm) from Nanopartz were deposited onto the microcavity surface. A desirable linewidth change \({\mathrm{\Delta }}\kappa\) accumulated during deposition was roughly 40–60 fm. Microsphere surface functionalisation and passivation are further detailed in Supplementary Methods 2. All aminothiol linkers were bound to the gold NRs in sodium carbonate-bicarbonate buffer at a pH above 10.75 with 1 M of sodium chloride ions. Additionally, washing steps were interspersed throughout each experiment to expel extraneous adsorbents.
Resonance tracking
In our experiments, the whispering-gallery mode resonance extremum of the sensor is monitored using a bespoke centroid method41,48
$${\mathrm{First}}\,{\mathrm{Moment}} = \frac{{\mathop {\sum}\nolimits_{i = 1}^n {i[T_{{\mathrm{Threshold}}} - T(i)]} }}{{\mathop {\sum}\nolimits_{i = 1}^n {[T_{{\mathrm{Threshold}}} - T(i)]} }},$$
where \(T_{{\mathrm{Threshold}}}\) is the fixed transmission threshold and \(n\) is the number of points defined to be in the resonant mode. The external cavity laser is swept linearly across an ~8.5 pm wavelength range as driven by a triangular scan waveform, wherein hysteresis is averted by selectively recording the upscan. The transmission spectra are acquired with a sampling rate of 2.5 MHz and a bit depth of 14. Given that the laser diode emission intensity varies over the wavelength scan, 200 spectra are first averaged prior to coupling. The spectrum is then flattened and a fixed transmission threshold for peak detection is set. A resonance dip is only recognised if it falls below the transmission threshold and its width exceeds a successive point minimum. If these conditions are satisfied, the time trace of the computed lineshape position and width can be visualised in real time and stored for post-analysis. Several noise sources in the frequency domain are also taken into account during our measurements, e.g. temperature drift (i.e. thermorefractivity and thermoelasticity), mechanical vibrations, laser mode hopping, and nanorod displacement.
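A minimal numpy rendering of this detection/centroid step (a sketch; the threshold, minimum width, and single-dip assumption are placeholders rather than the values used in the LabVIEW code):

import numpy as np

def centroid_dip(T, T_threshold, min_width=5):
    """First-moment (centroid) position of a resonance dip below threshold.

    T: flattened transmission spectrum over the wavelength upscan.
    Returns the sub-sample dip position in index units, or None if the dip
    is shallower than the threshold or narrower than min_width points.
    """
    below = np.flatnonzero(T < T_threshold)
    if below.size < min_width:
        return None                                  # reject noise spikes
    i = np.arange(below[0], below[-1] + 1)           # window spanning the dip
    w = np.clip(T_threshold - T[i], 0.0, None)       # weights inside the dip
    return float(np.sum(i * w) / np.sum(w))          # the first moment above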
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Elson, E. L. Fluorescence correlation spectroscopy: past, present, future. Biophys. J. 101, 2855–2870 (2011).
Lerner, E., Cordes, T., Ingargiola, A., Alhadid, Y., Chung, S., Michalet, X. & Weiss, S., Toward dynamic structural biology: Two decades of single-molecule Förster resonance energy transfer. Science 359, https://doi.org/10.1126/science.aan1133 (2018).
Hillmering, M., Pardon, G., Vastesson, A., Supekar, O., Carlborg, C. F., Brandner, B. D., van der Wijngaart, W. & Haraldsson, T. Off-stoichiometry improves the photostructuring of thiol–enes through diffusion-induced monomer depletion. Microsyst. Nanoeng. 2, https://doi.org/10.1038/micronano.2015.43 (2016).
McBride, M. K., Martinez, A. M., Cox, L., Alim, M., Childress, K., Beiswinger, M., Podgorski, M., Worrell, B. T., Killgore, J. & Bowman, C. N. A readily programmable, fully reversible shape-switching material. Sci. Adv. 4, https://doi.org/10.1126/sciadv.aat4634 (2018).
Pulcu, G. S., Mikhailova, E., Choi, L.-S. & Bayley, H. Continuous observation of the stochastic motion of an individual small-molecule walker. Nat. Nanotechnol. 10, 76–83 (2014).
Kassem, S., van Leeuwen, T., Lubbe, A. S., Wilson, M. R., Feringa, B. L. & Leigh, D. A. Artificial molecular motors. Chem. Soc. Rev. 46, 2592–2621 (2017).
Pensa, E., Cortés, E., Corthey, G., Carro, P., Vericat, C., Fonticelli, M. H., Benı́tez, G., Rubert, A. A. & Salvarezza, R. C. The chemistry of the sulfur–gold interface: in search of a unified model. Acc. Chem. Res. 45, 1183–1192 (2012).
Zhou, C., Duan X. & Liu, N. A plasmonic nanorod that walks on DNA origami. Nat. Commun. 6, https://doi.org/10.1038/ncomms9102 (2015).
Betz, S. F. Disulfide bonds and the stability of globular proteins. Protein Sci. 2, 1551–1558 (1993).
Carl, P., Kwok, C. H., Manderson, G., Speicher, D. W. & Discher, D. E. Forced unfolding modulated by disulfide bonds in the Ig domains of a cell adhesion molecule. Proc. Natl Acad. Sci. 98, 1565–1570 (2001).
Song, J., Yuan, Z., Tan, H., Huber, T. & Burrage, K. Predicting disulfide connectivity from protein sequence using multiple sequence feature vectors and secondary structure. Bioinformatics 23, 3147–3154 (2007).
Winterbourn, C. C. & Hampton, M. B. Thiol chemistry and specificity in redox signaling. Free Radic. Biol. Med. 45, 549–561 (2008).
Fu, X., Cate, S. A., Dominguez, M., Osborn, W., Özpolat, T., Konkle, B. A., Chen, J. & López, J. A. Cysteine Disulfides (Cys-ss-X) as Sensitive Plasma Biomarkers of Oxidative Stress. Sci. Rep. 9, https://doi.org/10.1038/s41598-018-35566-2 (2019).
Winther, J. R. & Thorpe, C. Quantification of thiols and disulfides. Biochim. Biophys. Acta, Gen. Subj. 1840, 838–846 (2014).
Rahman, I., Kode, A. & Biswas, S. K. Assay for quantitative determination of glutathione and glutathione disulfide levels using enzymatic recycling method. Nat. Protoc. 1, 3159–3165 (2006).
Kneipp, K., Wang, Y., Kneipp, H., Perelman, L. T., Itzkan, I., Dasari, R. R. & Feld, M. S. Single molecule detection using surface-enhanced raman scattering (SERS). Phys. Rev. Lett. 78, 1667–1670 (1997).
Nie, S. & Emory, S. R. Probing single molecules and single nanoparticles by surface-enhanced Raman scattering. Science 275, 1102–1106 (1997).
Zijlstra, P., Paulo, P. M. R. & Orrit, M. Optical detection of single non-absorbing molecules using the surface plasmon resonance of a gold nanorod. Nat. Nanotechnol. 7, 379–382 (2012).
Gross, L., Mohn, F., Moll, N., Liljeroth, P. & Meyer, G. The chemical structure of a molecule resolved by atomic force microscopy. Science 325, 1110–1114 (2009).
Hanay, M. S., Kelber, S., Naik, A. K., Chi, D., Hentz, S., Bullard, E. C., Colinet, E., Duraffourg, L. & Roukes, M. L. Single-protein nanomechanical mass spectrometry in real time. Nat. Nanotechnol. 7, 602–608 (2012).
Ndieyira, J. W., Kappeler, N., Logan, S., Cooper, M. A., Abell, C., McKendry, R. A. & Aeppli, G. Surface-stress sensors for rapid and ultrasensitive detection of active free drugs in human serum. Nat. Nanotechnol. 9, 225–232 (2014).
Xu, B. & Tao, N. J. Measurement of single-molecule resistance by repeated formation of molecular junctions. Science 301, 1221–1223 (2003).
Garaj, S., Hubbard, W., Reina, A., Kong, J., Branton, D. & Golovchenko, J. A. Graphene as a subnanometre trans-electrode membrane. Nature 467, 190–193 (2010).
Sorgenfrei, S., Chiu, C.-y, Gonzalez, R. L. Jr., Yu, Y.-J., Kim, P., Nuckolls, C. & Shepard, K. L. Label-free single-molecule detection of DNA-hybridization kinetics with a carbon nanotube field-effect transistor. Nat. Nanotechnol. 6, 126–132 (2011).
Cui, L., Hur, S., Akbar, Z. A., Klöckner, J. C., Jeong, W., Pauly, F., Jang, S.-Y., Reddy, P. & Meyhofer, E. Thermal conductance of single-molecule junctions. Nature 572, 628–633 (2019).
Baaske, M. D., Foreman, M. R. & Vollmer, F. Single-molecule nucleic acid interactions monitored on a label-free microcavity biosensor platform. Nat. Nanotechnol. 9, 933–939 (2014).
Foreman, M. R. & Vollmer, F. Theory of resonance shifts of whispering gallery modes by arbitrary plasmonic nanoparticles. New J. Phys. 15, https://doi.org/10.1088/1367-2630/15/8/083006 (2013).
Foreman, M. R. & Vollmer, F. Level repulsion in hybrid photonic-plasmonic microresonators for enhanced biodetection. Phys. Rev. A 88, https://doi.org/10.1103/PhysRevA.88.023831 (2013).
Klusmann, C., Suryadharma, R. N. S., Oppermann, J., Rockstuhl, C. & Kalt, H. Hybridizing whispering gallery modes and plasmonic resonances in a photonic metadevice for biosensing applications [Invited]. J. Opt. Soc. Am. B 34, D46–D55 (2017).
Zhu, J., Ozdemir, S. K., Xiao, Y.-F., Li, L., He, L., Chen, D.-R. & Yang, L. On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator. Nat. Photonics 4, 46–49 (2009).
Shao, L., Jiang, X.-F., Yu, X.-C., Li, B.-B., Clements, W. R., Vollmer, F., Wang, W., Xiao, Y.-F. & Gong, Q. Detection of single nanoparticles and lentiviruses using microcavity resonance broadening. Adv. Mater. 25, 5616–5620 (2013).
Lu, T., Su, T.-T. J., Vahala, K. J. & Fraser, S. E. Split frequency sensing methods and systems. US Patent 8593638 (2013).
Kim, E., Baaske, M. D. & Vollmer, F. In situ observation of single-molecule surface reactions from low to high affinities. Adv. Mater. 28, 9941–9948 (2016).
Leff, D. V., Brandt, L. & Heath, J. R. Synthesis and characterization of hydrophobic, organically-soluble gold nanocrystals functionalized with primary amines. Langmuir 12, 4723–4730 (1996).
Pong, B.-K., Lee, J.-Y. & Trout, B. L. First principles computational study for understanding the interactions between ssDNA and gold nanoparticles: adsorption of methylamine on gold nanoparticulate surfaces. Langmuir 21, 11599–11603 (2005).
Venkataraman, L., Klare, J. E., Tam, I. W., Nuckolls, C., Hybertsen, M. S. & Steigerwald, M. L. Single-molecule circuits with well-defined molecular conductance. Nano Lett. 6, 458–462 (2006).
Kim, Y., Hellmuth, T. J., Bürkle, M., Pauly, F. & Scheer, E. Characteristics of amine-ended and thiol-ended alkane single-molecule junctions revealed by inelastic electron tunneling spectroscopy. ACS Nano 5, 4104–4111 (2011).
Xie, H.-J., Lei, Q.-F. & Fang, W.-J. Intermolecular interactions between gold clusters and selected amino acids cysteine and glycine: a DFT study. J. Mol. Model. 18, 645–652 (2011).
Heylman, K. D., Thakkar, N., Horak, E. H., Quillin, S. C., Cherqui, C., Knapper, K. A., Masiello, D. J. & Goldsmith, R. H. Optical microresonators as single-particle absorption spectrometers. Nat. Photonics 10, 788–795 (2016).
Nelson, J. W. & Creighton, T. E. Reactivity and ionization of the active site cysteine residues of DsbA, a protein required for disulfide bond formation in vivo. Biochemistry 33, 5974–5983 (1994).
Baaske, M. D. & Vollmer, F. Optical observation of single atomic ions interacting with plasmonic nanorods in aqueous solution. Nat. Photonics 10, 733–739 (2016).
O'Neil, M. J. The Merck Index, 15th edn (Royal Society of Chemistry, Cambridge, 2013).
Serjeant, E. P. & Dempsey, B. Ionisation Constants of Organic Acids in Aqueous Solution (Pergamon Press, Oxford/New York, 1979).
Arnold, S., Khoshsima, M., Teraoka, I., Holler, S. & Vollmer, F. Shift of whispering-gallery modes in microspheres by protein adsorption. Opt. Lett. 28, 272–274 (2003).
Jin, W. & Chen, H. A new method of determination of diffusion coefficients using capillary zone electrophoresis (peak-height method). Chromatographia 52, 17–21 (2000).
Roelli, P., Galland, C., Piro, N. & Kippenberg, T. J. Molecular cavity optomechanics as a theory of plasmon-enhanced Raman scattering. Nat. Nanotechnol. 11, 164–169 (2015).
Mauranyapin, N. P., Madsen, L. S., Taylor, M. A., Waleed, M. & Bowen, W. P. Evanescent single-molecule biosensing with quantum-limited precision. Nat. Photonics 11, 477–481 (2017).
Kukanskis, K., Elkind, J., Melendez, J., Murphy, T., Miller, G. & Garner, H. Detection of DNA Hybridization Using the TISPR-1 Surface Plasmon Resonance Biosensor. Anal. Biochem. 274, 7–17 (1999).
The authors acknowledge funding from the University of Exeter, the Engineering and Physical Sciences Research Council (Ref. EP/R031428/1), and from the European Research Council under an H2020-FET open grant (ULTRACHIRAL, ID: 737071). Spectral data was acquired and step signals were evaluated using LabVIEW software developed by M.D. Baaske.
Living Systems Institute, School of Physics, University of Exeter, Exeter, EX4 4QD, UK
Serge Vincent, Sivaraman Subramanian & Frank Vollmer
S.V. designed and performed the experiments, completed the data analysis, and composed the manuscript. S.S. wrote the MATLAB application for transient signal analysis, while F.V. supervised the project and revised the manuscript. All authors discussed and interpreted the results.
Correspondence to Serge Vincent or Frank Vollmer.
The authors declare no competing interests.
Vincent, S., Subramanian, S. & Vollmer, F. Optoplasmonic characterisation of reversible disulfide interactions at single thiol sites in the attomolar regime. Nat Commun 11, 2043 (2020). https://doi.org/10.1038/s41467-020-15822-8
Simulation of Groundwater Variation Characteristics of Hancheon Watershed in Jeju Island using Integrated Hydrologic Modeling
Kim, Nam-Won; Na, Hanna; Chung, Il-Moon
https://doi.org/10.5322/JESI.2013.22.5.515
To investigate groundwater variation characteristics in the Hancheon watershed, Jeju Island, an integrated hydrologic component analysis was carried out. For this purpose, SWAT-MODFLOW, an integrated surface water-groundwater model, was applied to the watershed for continuous watershed hydrologic analysis as well as groundwater modeling. First, the ephemeral stream characteristics of the Hancheon watershed could be clearly simulated, which a general watershed hydrologic model is unlikely to capture. Second, temporally varying groundwater recharge was properly obtained from SWAT and then spatially distributed over the aquifer in MODFLOW. Finally, groundwater level variation was simulated with distributed groundwater pumping data. Since accurate recharge as well as abstraction can be reflected in the groundwater modeling, more realistic hydrologic component analysis and groundwater modeling become possible.
A Method of Simulating Ephemeral Stream Runoff Characteristics in Cheonmi-cheon Watershed, Jeju Island
Kim, Nam-Won; Chung, Il-Moon; Na, Hanna
In this study, a method of simulating ephemeral stream runoff characteristics in a Jeju watershed is newly suggested. A process-based conceptual-physical scheme is established based on SWAT-K and applied to the Cheonmi-cheon watershed, which shows a typical pattern of ephemeral stream runoff. For a proper simulation of this runoff, the interflow and baseflow components are controlled so that downward percolation becomes dominant. The surface runoff simulated using the modified scheme showed good agreement with observed runoff data. In addition, it was found that the estimated runoff directly affected the groundwater recharge rate. This conceptual model should be developed further, incorporating rainfall interception, spatially estimated evapotranspiration, and so forth, for a reasonable simulation of the hydrologic characteristics of Jeju Island.
Assessment of Actual Evapotranspiration in the Hancheon Watershed, Jeju Island
Kim, Nam Won; Lee, Jeong Eun
In this study, estimation methods for actual evapotranspiration have been studied using the concepts of potential and actual evapotranspiration. Among the diverse estimation methods, SWAT-K was chosen for hydrological modeling, and annual and monthly evapotranspiration over Jeju Island was characterized. In the results, simulated potential evapotranspiration reached 91% of small pan evaporation. When the temperature lapse rate ($-6^{\circ}C/km$) with altitude on Halla mountain was taken into account, the evapotranspiration rate decreased by 7.5% compared to applying the temperature data from the Jeju weather station directly to the watershed. As average annual rainfall increased, potential evapotranspiration increased while actual evapotranspiration decreased.
Development of Topological Correction Algorithms for ADCP Multibeam Bathymetry Measurements
Kim, Dong-Su; Yang, Sung-Kee; Kim, Soo-Jeong; Jung, Woo-Yul
Acoustic Doppler Current Profilers (ADCPs) are increasingly popular in the river research and management communities, being primarily used for estimation of stream flows. ADCPs' capabilities, however, entail additional features that are not fully explored, such as morphological representation of river or reservoir beds based upon multi-beam depth measurements. In addition to flow velocity, ADCP measurements include river bathymetry information through the depth measurements acquired by the individual 4 or 5 beams at a given oblique angle. Such sounding capability indicates that multi-beam ADCPs can be utilized as efficient depth-sounders, more capable than conventional single-beam echo-sounders. This paper introduces the post-processing algorithms required to deal with raw ADCP bathymetry measurements, covering the following steps: a) correcting the individual beam depths for tilt (pitch and roll); b) filtering outliers using SMART filters; c) transforming the corrected depths into geographical coordinates by UTM conversion; d) tagging the beam detecting locations with the concurrent GPS information; and e) spatial representation in a GIS package. The developed algorithms are applied to an ADCP bathymetric dataset acquired from Han-Cheon in Jeju Island to validate their applicability.
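As an illustration of step a), the tilt correction can be phrased with rotation matrices (a sketch only: the 20° Janus beam geometry and the pitch/roll sign conventions below are assumptions, and real instruments document their own conventions).

import numpy as np

def tilt_corrected_depths(slant_ranges, pitch_deg, roll_deg, beam_angle_deg=20.0):
    """Vertical depths from the 4 slant-range beams of a Janus ADCP."""
    b = np.deg2rad(beam_angle_deg)
    # Beam unit vectors in the instrument frame (z pointing down).
    beams = np.array([[ np.sin(b), 0.0, np.cos(b)],
                      [-np.sin(b), 0.0, np.cos(b)],
                      [0.0,  np.sin(b), np.cos(b)],
                      [0.0, -np.sin(b), np.cos(b)]])
    p, r = np.deg2rad(pitch_deg), np.deg2rad(roll_deg)
    Rp = np.array([[np.cos(p), 0.0, np.sin(p)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(p), 0.0, np.cos(p)]])   # rotation about y (pitch)
    Rr = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r), np.cos(r)]])    # rotation about x (roll)
    earth = beams @ (Rr @ Rp).T                     # beams in the earth frame
    return np.asarray(slant_ranges) * earth[:, 2]   # vertical components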
Characteristics of Runoff on Urban Watershed in Jeju island, Korea
Jung, Woo-Yul; Yang, Sung-Kee; Lee, Jun-Ho
Jeju Island, the area of heaviest rainfall in Korea, is a volcanic island located at the southernmost tip of Korea, but most of its streams are ephemeral due to hydrological/geological characteristics different from those of inland areas. Therefore, there are limitations in applying results from the mainland to studies of stream runoff characteristics and water resource analysis on Jeju Island. In this study, the SWAT (Soil & Water Assessment Tool) model is used for the Hwabuk stream watershed, located east of the downtown area, to calculate the long-term stream runoff rate, and the WMS (Watershed Modeling System) and HEC-HMS (Hydrologic Modeling System) models are used to characterize stream runoff from short-term heavy rainfall. From the SWAT long-term rainfall-runoff modelling of the Hwabuk stream watershed, 5.66% of the average precipitation over the entire basin ran off in 2008, 3.47% in 2009, and 8.12% in 2010; the root mean square error (RMSE) and determination coefficient ($R^2$) were 496.9 and 0.87, respectively, with a model efficiency (ME) of 0.72. From the WMS and HEC-HMS short-term rainfall-runoff models, unless there was a preceding rainfall, runoff occurred only for rainfall of 40 mm or greater, and the runoff duration averaged 10~14 hours.
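For reference, the three goodness-of-fit statistics quoted above can be computed as follows (a sketch; we assume ME denotes the Nash-Sutcliffe efficiency conventionally reported with SWAT).

import numpy as np

def fit_statistics(obs, sim):
    """RMSE, determination coefficient R^2, and Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, r2, nse          # cf. the reported 496.9, 0.87, and 0.72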
Estimation of Roughness Coefficient Using a Representative Grain Diameter for Han Stream in Jeju Island
Lee, Jun-Ho; Yang, Sung-Kee; Kim, Dong-Su
The roughness coefficient was computed, and its applicability reviewed, based on measurement of the representative grain diameter reflecting the channel characteristics of Han Stream. After a field survey, collection of bed material, and grain-size analysis of the collected bed material, the roughness coefficient was computed from the representative grain diameter using existing empirical equations. The roughness coefficient calculated using the equation by Meyer-Peter and Muller (1948) was 0.0417 for the upstream reach, 0.0432 for the midstream reach, and 0.0493 for the downstream reach. Comparing the computed roughness coefficient to other empirical equations for a review of applicability, the coefficient from the Strickler (1923) equation was larger by 0.006, the value from the Planning Report for River Improvement Works was smaller, the equation by Garde and Raju (1978) gave a value larger by 0.004, and the equations by Lane and Carlson (1953) and by Meyer-Peter and Muller (1948) gave values larger by 0.001. A precise roughness coefficient is extremely important when computing flood discharge in rivers to prevent destruction of downstream embankments and property damage from flooding. Since the roughness coefficient is determined by complicated factors and differs in time and space, continued management of roughness coefficients in rivers and streams is deemed necessary.
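The grain-size route to Manning's n used here can be sketched as follows (the coefficient values, Strickler n = d50^(1/6)/21.1 and Meyer-Peter and Muller n = d90^(1/6)/26 with diameters in metres, are the commonly quoted forms and are assumptions on our part).

def manning_n_strickler(d50_m):
    """Strickler (1923): Manning's n from the median diameter d50 (metres)."""
    return d50_m ** (1.0 / 6.0) / 21.1

def manning_n_meyer_peter_muller(d90_m):
    """Meyer-Peter and Muller (1948): Manning's n from d90 (metres)."""
    return d90_m ** (1.0 / 6.0) / 26.0

# Illustration: a boulder-bed reach with d90 = 1.6 m gives n ~ 0.042,
# the magnitude reported for the upstream reach.
print(manning_n_meyer_peter_muller(1.6))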
Stream Flow Analysis of Dry Stream on Flood Runoff in Islands
Yang, Won-Seok; Yang, Sung-Kee
In this study, the water surface elevations and velocities from the river maintenance basic plan were compared with the results of GIS-based HEC-GeoRAS, and a two-dimensional flow analysis was performed using water surface elevations and velocities observed in the Han stream on Jeju Island. The lateral hydraulic characteristics and the curved channel of the stream were analyzed by applying SMS-RMA2, a two-dimensional model. The results of the HEC-RAS and HEC-GeoRAS analyses indicated that the distribution ranges of water surface elevation and velocity were similar, but the water surface elevation by section differed by 0.7~2.18 EL.m and 0.63~1.16 EL.m respectively, and the velocity also differed by up to 1.58 m/sec and 2.67 m/sec. The SMS-RMA2 analysis was done with typhoon Muifa as a boundary condition; the resulting velocity distribution was 1.19 through 3.91 m/sec, and the difference in lateral velocity in the No. 97 through 99 curved channel sections was analyzed to be 1.59 through 2.36 m/sec. In conclusion, it is anticipated that two-dimensional flow analysis can reflect the hydraulic characteristics of a stream's curved channel, width, and shape, and can be applied effectively in establishing river maintenance basic plans and in the management and design of streams.
Flood Runoff Measurements using Surface Image Velocimetry
Kim, Yong-Seok; Yang, Sung-Kee; Yu, Kwon-Kyu; Kim, Dong-Su
Surface Image Velocimetry (SIV) is an instrument to measure water surface velocity using image processing techniques. Since SIV is a non-contact measurement method, it is very effective and useful for measuring water surface velocity in steep mountainous streams, such as those on Jeju Island. In the present study, a surface image velocimetry system was used to calculate the flow rate for a flood event caused by a typhoon. At the same time, two types of electromagnetic surface velocimeters (an electromagnetic surface current meter and a Kalesto) were used to observe flow velocities and compare the accuracy of each instrument. For the velocity distributions, the root mean square error (RMSE) was 0.33 and R-squared was 0.72; for the discharge measurements, RMSE reached 6.04 and R-squared 0.92. This means that surface image velocimetry could be used as an alternative to electromagnetic surface velocimeters in measuring flood discharge.
Characteristics of Runoff on Southern Area of Jeju Island, Korea
Kang, Myung-Su; Yang, Sung-Kee; Jung, Woo-Yeol; Kim, Dong-Su
For the Kangjeong and Akgeun streams in the central part of southern Jeju Island, on-site discharge estimation was carried out for approximately 10 months (July 2011-April 2012), twice a month on a regular basis, using an ADCP (acoustic Doppler current profiler), and the long-term discharge rate was calculated using the SWAT (Soil and Water Assessment Tool) model. The discharge was $0.28-1.30m^3/sec$ for Kangjeong stream and $0.10-1.54m^3/sec$ for Akgeun stream, showing a maximum in summer and a minimum in winter. Parameter sensitivity analysis of the SWAT model showed that CN (NRCS runoff curve number for moisture condition II), SOL_AWC (available water capacity of the soil layer), and ESCO (soil evaporation compensation factor) responded sensitively. Using these results, the model was calibrated and the discharge rate was calculated: the annual discharge rate was 27.12-31.86(%) in the Akgeun basin and 23.55-28.43(%) in the Kangjeong basin.
Estimation of Soil Erosion and Sediment Yield in Mountainous Stream
Ko, Jae-Wook; Yang, Sung-Kee; Yang, Won-Seok; Jung, Woo-Yeol; Park, Cheol-Su
Jeju Island, which lies along the typical typhoon track, suffers from flooding and overflow caused by torrential rain, and with abrupt runoff the damage to downstream farm fields and coastal aquaculture farms is increasing. In this study, Oaedo stream, one of the mountainous streams on Jeju Island, was selected as the study basin and classified into 3 sub-basins. After characterizing the subject basin, the soil erosion amount and the sediment delivery of the stream by land use distribution were estimated with SATEEC ArcView GIS, and the sediment yields of 2000 and 2005 were analyzed comparatively (a sketch of the underlying calculation follows below). The estimated sediment yields for 2000 in the three sub-basins were 12,572.7, 14,080 and 157,761 tons/year respectively, and for 2005 they were estimated as 35,172.9, 5,266 and 258,535 tons/year respectively. The soil erosion and sediment yield for 2005, based on a single storm rainfall, were estimated to be high compared with 2000, but for sub-basin 2 the values rather decreased due to changes in land use; moreover, since the 2005 land coverage has more land use classifications than that of 2000 and so reflects land use conditions more accurately, it could produce more appropriate results. Such results can be utilized as basic data for predicting the sediment yield that causes secondary flooding damage and deteriorates water quality within detention ponds and grit chambers, and for taking action against damage to downstream farm fields and coastal aquaculture farms.
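SATEEC couples the Universal Soil Loss Equation (USLE) with a sediment delivery ratio (SDR); the structure of that calculation can be sketched as follows (factor values below are placeholders, not values from the study).

def usle_soil_loss(R, K, LS, C, P):
    """USLE gross soil loss A = R*K*LS*C*P (e.g. tons/ha/yr)."""
    return R * K * LS * C * P

def sediment_yield(a_tons_per_ha, area_ha, sdr):
    """Watershed sediment yield = gross erosion * sediment delivery ratio."""
    return a_tons_per_ha * area_ha * sdr

# Placeholder example: A = 500*0.2*1.5*0.1*1.0 = 15 t/ha/yr over 1000 ha, SDR 0.3
print(sediment_yield(usle_soil_loss(500, 0.2, 1.5, 0.1, 1.0), 1000.0, 0.3))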
Evaluation of Regional Characteristics Using Time-series Data of Groundwater Level in Jeju Island
Song, Sung-Ho; Choi, Kwang-Jun; Kim, Jin-Sung
Fluctuation patterns of groundwater level, as a factor reflecting the characteristics of a groundwater system, can be categorized into various aquifer types using time-series data. Time-series data on groundwater level obtained from 115 monitoring wells in Jeju Island were classified according to variation type: largely affected by rainfall (Dr), by rainfall and pumping (Drp), or by unknown causes (De). The analysis indicates that 106 wells belong to Dr and Drp, and that the proportion of wells with a wide range of fluctuation in the western and northern regions is higher than in the eastern and southern regions. Given that Drp is relatively more frequent than Dr in the western region, which has the largest agricultural areas, groundwater level fluctuations there may be affected significantly by intensive agricultural use. Non-parametric trend analysis for the 115 monitoring wells shows that increasing and decreasing trends account for 14.8% and 22.6% of groundwater levels, respectively, with levels increasing in the western, southern and northern regions but not in the eastern region. Correlation analysis shows that the cross-correlation coefficients are relatively high and the time lags relatively short in the eastern and western regions, indicating that the rainfall recharge effect in these regions is relatively large due to the gentle topographic slope compared to the southern and northern regions.
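The cross-correlation/lag analysis referred to here can be sketched as follows (hypothetical daily rainfall and groundwater-level series; a positive lag means the water level responds after the rain, and the series are assumed much longer than max_lag).

import numpy as np

def peak_xcorr_lag(rain, gwl, max_lag=120):
    """Lag (in samples) maximising the rainfall/water-level cross-correlation."""
    r = (np.asarray(rain, float) - np.mean(rain)) / np.std(rain)
    g = (np.asarray(gwl, float) - np.mean(gwl)) / np.std(gwl)
    lags = np.arange(max_lag + 1)
    cc = np.array([np.corrcoef(r[:r.size - k], g[k:])[0, 1] for k in lags])
    return int(lags[np.argmax(cc)]), float(np.max(cc))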
Regional Drought Assessment Considering Climate Change and Relationship with Agricultural Water in Jeju Island
Song, Sung-Ho; Yoo, Seung-Hwan; Bae, Seung-Jong
Recently, the occurrence of droughts has increased because of global warming and climate change. Water resources that mostly rely on groundwater are particularly vulnerable to the impact of precipitation variation, one of the major elements of climate change, and are very sensitive to changes in seasonal distribution as well as in the annual average from the viewpoint of agricultural activity. In this study, the present and future drought status of Jeju Island, which relies entirely on groundwater, was analyzed using SPI and PDSI, considering the regional distribution of crops in terms of land use and the fluctuation of water demand. The results showed that the precipitation distribution in Jeju Island is changing in intensity as well as in the seasonal variation of extreme events, and the increase of precipitation during the dry spring and fall seasons indicates that agricultural water demand and supply policies should be considered by regional characteristics, especially for the western region with the largest area of market garden crops. Regarding simulated future drought, drought would be mitigated under the SPI method because it considers total rainfall only, excluding intensity variation, while it would intensify under the PDSI because the PDSI considers evapotranspiration as well as rainfall over time. Moreover, drought in the northern and western regions is getting worse than in the southern region, so regionally customized policies for water supply in Jeju Island are needed.
Estimation of Regional Agricultural Water Demand over the Jeju Island
Choi, Kwang-Jun; Song, Sung-Ho; Kim, Jin-Sung; Lim, Chan-Woo
Over 96.2% of the agricultural water in Jeju Island is obtained from groundwater, and the characteristics of agricultural water demand/supply are spatially quite distinct because of regional and seasonal differences in cropping systems and rainfall amounts. Land use for cultivating crops is expected to decrease by 7.4% (4,215 ha) in 2020 compared to 2010, while market garden farming, including various vegetable crops with high water demand, is increasing over the Island, especially in the western area, which has lower rainfall than the southern area. On the other hand, land use for fruit, including citrus and mandarin with low water demand, is widely distributed over the southern and northern parts, which have higher rainfall. The agricultural water demand of $1,214{\times}10^3\;m^3/day$ estimated for 2020 is about 1.39 times the groundwater supply capacity of $874{\times}10^3\;m^3/day$ in 2010, with regional ratios of 42.4% in the eastern, 103.1% in the western, 61.9% in the southern, and 77.0% in the northern region. Moreover, the net secured amount of agricultural groundwater would be expected to be much smaller due to the regional disparity of water demand/supply, the lack of a linkage system between agricultural water supply facilities, and the high percentage of private wells. Therefore, it is necessary to secure the total net amount of agricultural groundwater to overcome the expected regional discrepancy between water demand and supply by establishing regional water supply plans over the Island, including linkage systems between wells, water tank enlargement, private well maintenance and public well development, and continuous expansion of rainwater utilization facilities.
\begin{document}
\title{Descendents for stable pairs on 3-folds} \author{R. Pandharipande
\\
{\em {\footnotesize{Dedicated to Simon Donaldson on the occasion of his $60^{th}$ birthday}}}}
\date{March 2017} \maketitle
\begin{abstract} We survey here the construction and the basic properties of descendent invariants in the theory of stable pairs on nonsingular projective 3-folds. The main topics covered are the rationality of the generating series, the functional equation, the Gromov-Witten/Pairs correspondence for descendents, the Virasoro constraints, and the connection to the virtual fundamental class of the stable pairs moduli space
in algebraic cobordism. In all of these directions, the proven results constitute only a small part of the conjectural framework. A central goal of the article is to introduce the open questions as simply and directly as possible. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\setcounter{section}{-1} \section{Introduction}
\subsection{Moduli space of stable pairs} Let $X$ be a nonsingular projective $3$-fold. The moduli of curves in $X$ can be approached in several different ways.{\footnote{For a discussion of the different approaches, see \cite{rp13}.}} For an algebraic geometer, perhaps the most straightforward is the Hilbert scheme of subcurves of $X$. The moduli of stable pairs is closely related to the Hilbert scheme, but is geometrically much more efficient. While the definition of a stable pair takes some time to understand, the advantages of the moduli theory more than justify the effort.
\begin{definition} \label{sp} A {\em{stable pair}} $(F,s)$ on $X$ is a coherent sheaf $F$ on $X$
and a section $s\in H^0(X,F)$ satisfying the following stability conditions: \begin{itemize} \item $F$ is \emph{pure} of dimension 1, \item the section $s:{\mathcal{O}}_X\to F$ has cokernel of dimension 0. \end{itemize} \end{definition}
Let $C$ be the scheme-theoretic support of $F$. By the purity condition, all the irreducible components of $C$ are of dimension 1 (no 0-dimensional components are permitted). By \cite[Lemma 1.6]{pt}, the kernel of $s$ is the ideal sheaf of $C$, $$\II_C=\text{ker}(s) \subset {\mathcal{O}}_X\, ,$$ and $C$ has no embedded points. A stable pair $${\mathcal{O}}_X\to F$$ therefore defines
a Cohen-Macaulay subcurve $C\subset X$ via the kernel of $s$
and a 0-dimensional subscheme\footnote{When $C$ is Gorenstein (for instance if $C$ lies in a nonsingular surface), stable pairs supported on $C$ are in bijection with 0-dimensional subschemes of $C$. More precise scheme theoretic isomorphisms of moduli spaces are proved in \cite[Appendix B]{pt3}.} of $C$ via the support of the
cokernel of $s$.
To a stable pair, we associate the Euler characteristic and the class of the support $C$ of $F$, $$\chi(F)=n\in \mathbb{Z} \ \ \ \text{and} \ \ \ [C]=\beta\in H_2(X,\mathbb{Z})\,.$$ For fixed $n$ and $\beta$, there is a projective moduli space of stable pairs $P_n(X,\beta)$. Unless $\beta$ is an effective curve class, the moduli space $P_n(X,\beta)$ is empty.
A foundational treatment of the moduli space of stable pairs is presented in \cite{pt} via the results of Le Potier \cite{LePot}. Just as the Hilbert scheme $I_n(X,\beta)$ of subcurves of $X$ of Euler characteristic $n$ and class $\beta$ is a fine moduli space with a universal quotient sequence, $P_n(X,\beta)$ is a fine moduli space with a universal stable pair \cite[Section 2.3]{pt}. While the Hilbert scheme $I_n(X,\beta)$ is a moduli space of curves with free and embedded points, the moduli space of stable pairs $P_n(X,\beta)$
should be viewed as a moduli space of curves with points \emph{on the curve} determined by the cokernel of $s$. Though the additional points still play a role, $P_n(X,\beta)$ is much smaller than $I_n(X,\beta)$.
If $P_n(X,\beta)$ is non-empty, then $P_{m}(X,\beta)$ is non-empty for all $m>n$. Stable pairs with higher Euler characteristic can be obtained by suitably twisting stable pairs with lower Euler characteristic (in other words, by {\em adding points}). On the other hand, for a fixed class $\beta\in H_2(X,\mathbb{Z})$, the moduli space $P_n(X,\beta)$ is empty for all sufficiently negative $n$. The proof exactly parallels the same result for the Hilbert scheme of curves $I_n(X,\beta)$.
\subsection{Action of the descendents}\label{actact} Denote the universal stable pair over $X\times P_{n}(X,\beta)$ by $${\mathcal{O}}_{X\times P_n(X,\beta)} \stackrel{s\ }{\rightarrow} \FF .$$ For a stable pair $(F,s)\in P_{n}(X,\beta)$, the restriction of the universal stable pair to the fiber
$$X \times (F,s) \ \subset\ X\times P_{n}(X,\beta) $$ is canonically isomorphic to ${\mathcal{O}}_X\stackrel{s\ }{\rightarrow} F$. Let $$\pi_X\colon X\times P_{n}(X,\beta)\to X,$$ $$\pi_P\colon X\times P_{n}(X,\beta) \to P_{n}(X,\beta)$$
be the projections onto the first and second factors. Since $X$ is nonsingular and $\FF$ is $\pi_P$-flat, $\FF$ has a finite resolution by locally free sheaves.{\footnote{Both $X$ and $P_n(X,\beta)$ carry ample line bundles.}} Hence, the Chern character of the universal sheaf $\FF$ on $X \times P_n(X,\beta)$ is well-defined.
\begin{definition}\label{dact} For each cohomology{\footnote{All homology and cohomology groups will be taken with $\mathbb{Q}$-coefficients unless explicitly denoted otherwise.}} class $\gamma\in H^*(X)$ and integer $i\in \mathbb{Z}_{\geq 0}$, the action of the {\em descendent} $\tau_i(\gamma)$ is defined by $$ \tau_i(\gamma)=\pi_{P*}\big(\pi_X^*(\gamma)\cdot \text{\em ch}_{2+i}(\FF) \cap \pi_P^*(\ \cdot\ )\big)\, .$$ \end{definition} \noindent The pull-back $\pi_P^*$ is well-defined in homology since $\pi_P$ is flat \cite{Ast}.
We may view the descendent action as defining a cohomology class $$\tau_i(\gamma)\in H^*(P_n(X,\beta))$$ or as defining an endomorphism $$\tau_i(\gamma): H_*(P_{n}(X,\beta))\to H_*(P_{n}(X,\beta))\, . $$ Definition \ref{dact} is the standard method of obtaining classes on moduli spaces of sheaves via universal structures. The construction has been used previously for the cohomology of the moduli space of bundles on a curve \cite{New}, for the cycle theory of the Hilbert schemes of points of a surface \cite{ES}, and in Donaldson's famous $\mu$ map for gauge theory on 4-manifolds \cite{DonK}.
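As a simple consistency check on the indexing in Definition \ref{dact} (standard, and not needed in the sequel), fix a stable pair $(F,s)\in P_n(X,\beta)$ and restrict $\FF$ to the fiber $X\times (F,s)$. Since $F$ is pure of dimension 1 with support class $\beta$, $$\text{ch}_0(F)=0\, ,\ \ \ \text{ch}_1(F)=0\, , \ \ \ \text{ch}_2(F)=\beta\, ,$$ where $\beta$ is identified with its Poincar\'e dual. Hirzebruch-Riemann-Roch then determines the degree of $\text{ch}_3(F)$: $$n=\chi(F)=\int_X \text{ch}(F)\cdot \text{td}(T_X)= \int_X\text{ch}_3(F) + \frac{1}{2}\int_\beta c_1(X)\, .$$ The curve class is carried by $\text{ch}_2$ and the Euler characteristic by $\text{ch}_3$, which motivates the shift by 2 in the correspondence $\tau_i(\gamma) \leftrightarrow \text{ch}_{2+i}(\FF)$ of Definition \ref{dact}.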
\subsection{Tautological classes} \label{actactt} Let $\mathbb{D}$ denote the polynomial $\mathbb{Q}$-algebra on the symbols
$$\{ \, \tau_i(\gamma)\, |\, i\in \mathbb{Z}_{\geq 0} \text{ and } \gamma\in H^*(X)\, \}$$ subject to the basic linear relations \begin{eqnarray*} \tau_i(\lambda\cdot \gamma) & = & \lambda \tau_i(\gamma)\, ,\\ \tau_i(\gamma+ \widehat{\gamma}) &=& \tau_i(\gamma)+ \tau_i(\widehat{\gamma})\, , \end{eqnarray*} for $\lambda \in \mathbb{Q}$ and $\gamma, \widehat{\gamma}\in H^*(X)$. The descendent action defines a $\mathbb{Q}$-algebra homomorphism $$\alpha^X_{n,\beta}: \mathbb{D} \rightarrow H^*(P_n(X,\beta))\, .$$ The most basic questions about the descendent action are to determine $$\text{Ker}(\alpha^X_{n,\beta}) \subset \mathbb{D} \ \ \text { and } \ \ \text{Im}(\alpha^X_{n,\beta}) \subset H^*(P_n(X,\beta))\, .$$ Both questions are rather difficult since the space $P_n(X,\beta)$ can be very complicated (with serious singularities and components of different dimensions). Few methods are available to study $H^*(P_n(X,\beta))$.
Following the study of the cohomology of the moduli of stable curves, we define, for the moduli space of stable pairs $P_n(X,\beta)$, \begin{enumerate} \item[$\bullet$]$ \text{Im}(\alpha^X_{n,\beta})\subset H^*(P_n(X,\beta))$ to be the algebra of {\em tautological classes}, \item[$\bullet$] $\text{Ker}(\alpha^X_{n,\beta})\subset \mathbb{D}$ to be the ideal of {\em tautological relations}. \end{enumerate} The basic expectation is that natural constructions yield tautological classes. For the moduli spaces of curves there is a long history of the study of tautological classes, geometric constructions, and relations, see \cite{FaPa, PaSLC} for surveys.
As a simple example, consider the tautological classes in the case $$X={\mathsf{P}}^3\, , \ \ \ n=1\, , \ \ \ \beta=\mathsf{L}\, ,$$ where $\mathsf{L}\in H_2(\mathsf{P}^3,\mathbb{Z})$ is the class of a line. The moduli space $P_1({\mathsf{P}}^3,\mathsf{L})$ is isomorphic to the Grassmannian $\mathbb{G}(2,4)$. The ring homomorphism $$\alpha_{1,\mathsf{L}}^{{\mathsf{P}}^3}: \mathbb{D} \rightarrow H^*(P_1({\mathsf{P}}^3, \mathsf{L}))$$ is surjective, so {\em all} classes are tautological. The tautological relations
$$\text{Ker}(\alpha_{1,\mathsf{L}}^{{\mathsf{P}}^3})\subset \mathbb{D}$$ can be determined by the Schubert calculus.
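To spell the example out, a stable pair $(F,s)$ with $\chi(F)=1$ and support class $\mathsf{L}$ must be supported on a line $L'\subset {\mathsf{P}}^3$ with $F\cong {\mathcal{O}}_{L'}$ and $s$ a nonvanishing constant section (a nontrivial 0-dimensional cokernel would force $\chi(F)\geq 2$). The moduli space is therefore the space of lines in ${\mathsf{P}}^3$, which is $\mathbb{G}(2,4)$. The identification is consistent with the virtual dimension formula $d_\beta=\int_\beta c_1(X)$ recalled below: $$d_{\mathsf{L}}=\int_{\mathsf{L}} c_1({\mathsf{P}}^3) = \int_{\mathsf{L}} 4\mathsf{H}= 4 = \dim_{\mathbb{C}} \mathbb{G}(2,4)\, ,$$ as expected since the moduli space here is nonsingular of the expected dimension.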
Our study of descendents here follows a different line which is more accessible than the full analysis of $\alpha_{n,\beta}^X$. The moduli space $P_n(X,\beta)$ carries a virtual fundamental class $$[P_n(X,\beta)]^{vir} \in H_*(P_n(X,\beta))$$ obtained from the deformation theory of stable pairs. There is an associated integration map \begin{equation}\label{fred} \int_{[P_n(X,\beta)]^{vir}} :\ \mathbb{D} \rightarrow \mathbb{Q} \end{equation} defined by $$\int_{[P_n(X,\beta)]^{vir}} \mathsf{D} = \int_{P_n(X,\beta)} \alpha_{n,\beta}^X (\mathsf{D}) \cap [P_n(X,\beta)]^{vir}\, $$ for $\mathsf{D}\in \mathbb{D}$. Here, $$\int_{P_n(X,\beta)}:\ H_*(P_n(X,\beta)) \rightarrow \mathbb{Q}$$ is the canonical point counting map factoring through $H_0(P_n(X,\beta))$. The standard theory of descendents is a study of the integration map \eqref{fred}.
\subsection{Deformation theory} To define a virtual fundamental class \cite{BehFan,LiTian}, a 2-term deformation/obstruction theory must be found on the moduli space of stable pairs $P_n(X,\beta)$. As in the case of the Hilbert scheme $I_n(X,\beta)$, the most immediate
obstruction theory of $P_n(X,\beta)$ does \textit{not} admit such a structure. For $I_n(X,\beta)$, a suitable obstruction theory is obtained by viewing $C\subset X$ {\em not} as a subscheme, but rather as an ideal sheaf $\II_C$ with trivial determinant \cite{DT,THC}. For $P_n(X,\beta)$, a suitable obstruction theory is obtained by viewing a stable pair {\em not} as sheaf with a section, but as an object $$[{\mathcal{O}}_X\rightarrow F]\in D^b(X)$$ in the bounded derived category of coherent sheaves on $X$.
Denote the quasi-isomorphism equivalence class of the complex $[{\mathcal{O}}_X\rightarrow F]$ in $D^b(X)$ by $I\udot$.
The quasi-isomorphism class $I\udot$ determines{\footnote{The claims require the dimension of $X$ to be 3.}} the stable pair \cite[Proposition 1.21]{pt}, and the fixed-determinant deformations of $I\udot$ in $D^b(X)$ match those of the pair $(F,s)$ to all orders \cite[Theorem 2.7]{pt}. The latter property shows the scheme $P_n(X,\beta)$ may be viewed as a moduli space of objects in the derived category.{\footnote{The moduli of objects in the derived category usually yields Artin stacks. The space $P_n(X,\beta)$ is a rare example where a component of the moduli of objects in the derived category is a scheme (uniformly for all $3$-folds $X$).}} We can then use the obstruction theory of the complex $I\udot$ rather than the obstruction theory of sheaves with sections.
The deformation/obstruction theory for complexes at $[I\udot]\in P_n(X,\beta)$ is governed by \begin{equation}\label{exts2} \Ext^1(I\udot,I\udot)_0 \quad\mathrm{and}\quad \Ext^2(I\udot,I\udot)_0\,. \end{equation} The obstruction theory \eqref{exts2} has all the formal properties of the Hilbert scheme case: 2 terms, a virtual class of (complex) dimension $d_\beta=\int_\beta c_1(X)$, $$[P_{n}(X,\beta)]^{vir} \in H_{2d_\beta} \big(P_n(X,\beta),\mathbb{Z}\big)\, ,$$ and a description via the $\chi^B$-weighted Euler characteristics in the Calabi-Yau case \cite{Kai}.
\subsection{Descendent invariants}\label{dess} Let $X$ be a nonsingular projective 3-fold. For nonzero $\beta\in H_2(X,\mathbb{Z})$ and arbitrary $\gamma_i\in H^*(X)$, define the stable pairs invariant with descendent insertions by \begin{equation}\label{ddd} \Big\langle \tau_{k_1}(\gamma_1)\ldots \tau_{k_r}(\gamma_r) \Big\rangle_{\!n,\beta}^X = \int_{[P_{n}(X,\beta)]^{vir}} \prod_{i=1}^r \tau_{k_i}(\gamma_i)\ . \end{equation} The partition function is \begin{equation}\label{ppp}
\ZZ_{\mathsf{P}}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_{i}) \Big)_\beta =\sum_{n\in \mathbb{Z}} \Big\langle \prod_{i=1}^r \tau_{k_i}(\gamma_{i}) \Big\rangle_{\!n,\beta}^X q^n. \end{equation} Since $P_n(X,\beta)$ is empty for sufficiently negative $n$, the partition function is a Laurent series in $q$,
$$\mathsf{Z}_{\mathsf{P}}\Big(X;q\ \Big|\ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})\Big)_\beta \in \mathbb{Q}((q))\, .$$
The descendent invariants \eqref{ddd} and the associated partition functions \eqref{ppp} are the central topics of the paper. From the point of view of the complete tautological ring of descendent classes on $P_n(X,\beta)$, the descendent invariants \eqref{ddd} constitute only a small part of the full data. However, among many advantages, the integrals \eqref{ddd} are deformation invariant as $X$ varies in families. The same cannot be said of the tautological ring nor of the full cohomology $H^*(P_n(X,\beta))$.
In addition to carrying data about the tautological classes on $P_n(X,\beta)$, the descendent series are related to the enumerative geometry of curves in $X$. The connection is clearest for the primary fields $\tau_0(\gamma)$ which correspond to incidence conditions for the support curve of the stable pair with a fixed cycle $$V_\gamma \subset X$$ of class $\gamma \in H^*(X)$. But even for primary fields, the partition function
$$\mathsf{Z}_{\mathsf{P}}\Big(X;q\ \Big|\ \prod_{i=1}^r \tau_{0}(\gamma_{i})\Big)_\beta $$ provides a virtual count and is rarely strictly enumerative.
Descendents $\tau_k(D)$, for $k\geq 0$ and $D\subset X$ a divisor, can be viewed as imposing tangency conditions of the support curve of the stable pair along the divisor $D$. The connection of $\tau_k(D)$ to tangency conditions is not as close as the enumerative interpretation of primary fields --- the tangency condition is just the leading term in the understanding of $\tau_k(D)$. The topic will be discussed in Section \ref{ggogg}.
\subsection{Plan of the paper} The paper starts in Section \ref{111r} with a discussion of the rationality of the descendent partition function in absolute, equivariant, and relative geometries. While the general statement is conjectural, rationality in toric and hypersurface geometries has been proven in joint work with A. Pixton in \cite{part1,PP2,PPQ}. Examples of exact calculations of descendents are given in Section \ref{fex}. A precise conjecture for a functional equation related to the change of variable $$q \mapsto \frac{1}{q}$$ is presented in Section \ref{funk}, and a conjecture constraining the poles appears in Section \ref{polec}.
The second topic, the Gromov-Witten/Pairs correspondence for descendents, is discussed in Section \ref{222r}. The descendent theory of stable maps and stable pairs on a nonsingular projective 3-fold $X$ are conjectured to be {\em equivalent} via a universal transformation. While the correspondence is proven in joint work with A. Pixton in toric \cite{PPDC} and hypersurface \cite{PPQ} cases and several formal properties are established, a closed formula for the transformation is not known.
The Gromov-Witten/Pairs correspondence has motivated much of the development of the descendent theory on the sheaf side. The first such conjectures for descendent series
were made in joint work with D. Maulik, A. Okounkov, and N. Nekrasov \cite{MNOP1,MNOP2} in the context of the Gromov-Witten/Donaldson-Thomas correspondence for the partition functions associated to the Hilbert schemes $I_n(X,\beta)$ of subcurves of $X$.
Given the Gromov-Witten/Pairs correspondence and the well-known Virasoro constraints for descendents in Gromov-Witten theory, there must be corresponding Virasoro constraints for the descendent theory of stable pairs. For the Hilbert schemes $I_n(X,\beta)$ of curves, descendent constraints were studied by A. Oblomkov, A. Okounkov, and myself in Princeton a decade ago \cite{oop}. In Section \ref{333r}, conjectural descendent constraints for the stable pairs theory of ${\mathsf{P}}^3$ are presented (joint work with A. Oblomkov and A. Okounkov).
The moduli space of stable pairs $P_n(X,\beta)$ has a virtual fundamental class in homology $H_*(P_n(X,\beta))$. By construction, the class lifts to algebraic cycles $A_*(P_n(X,\beta))$. In a recent paper, Junliang Shen has lifted the virtual fundamental class further to algebraic cobordism $\Omega_*(P_n(X,\beta))$. Shen's results open a new area of exploration with beautiful structure. At the moment, the methods available to explore the virtual fundamental class in cobordism all use the theory of descendents (since the Chern classes of the virtual tangent bundle of $P_n(X,\beta)$ are {\em tautological}). Shen's work is presented in Section \ref{4448}.
\subsection{Acknowledgments} Discussions with J. Bryan, S. Katz, D. Maulik, G. Oberdieck, A. Oblomkov, A. Okounkov, A. Pixton, J. Shen, R. Thomas, Y. Toda, and Q. Yin about stable pairs and descendent invariants have played an important role in my view of the subject. I was partially supported by
SNF grant 200021-143274, ERC grant AdG-320368-MCSK, SwissMAP, and the Einstein Stiftung.
The perspective of the paper is based in part on my talk {\em Why descendents?} at the Newton institute in Cambridge in the spring of 2011, though much of the progress discussed here has happened since then.
\section{Rationality} \label{111r} \subsection{Overview} Let $X$ be a nonsingular projective 3-fold. Our goal here is to present the conjectures governing the {\em rationality} of the partition functions of descendent invariants for the stable pairs theory of $X$. The most straightforward statements are for the absolute theory, but we will present the rationality claims for the equivariant and relative stable pairs theories as well. The latter two appear naturally when studying the absolute theory: most results to date involve equivariant and relative techniques. In addition to rationality, we will also discuss the {\em functional equation} and the {\em pole constraints} for the descendent partition functions.
While rationality has been established in many cases, new ideas are required to prove the conjectures in full generality. The subject intertwines the Chern characters of the universal sheaves with the geometry of the virtual fundamental class. Perhaps, in the future, a point of view will emerge from which rationality is obvious. Hopefully, the
functional equation will then also be clear. At present, the geometries for which the functional equation has been proven are rather few.
\subsection{Absolute theory} Let $X$ be a nonsingular projective 3-fold. The stable pairs theory for $X$ as presented in the introduction is the {\em absolute} case. Let $\beta\in H_2(X,\mathbb{Z})$ be a nonzero class, and let $\gamma_i\in H^*(X)$. The following conjecture{\footnote{A weaker conjecture for descendent partition functions for the Hilbert scheme $I_n(X,\beta)$ was proposed earlier in \cite{MNOP2}.}} was proposed{\footnote{Theorems and Conjectures are dated in the text by the year of the arXiv posting. The published dates are later and can be found in the bibliography.}} in \cite{pt2}.
\begin{Conjecture}[P.-Thomas, 2007] \label{111} For $X$ a nonsingular projective $3$-fold, the descendent partition function
$$\ZZ_{\mathsf{P}}\big(X;q\ | \prod_{i=1}^r \tau_{k_i}(\gamma_{i}) \big)_\beta$$ is the Laurent expansion in $q$ of a rational function in $\mathbb{Q}(q)$. \end{Conjecture}
In the absolute case, the descendent series satisfies a dimension constraint. For $\gamma_i\in H^{e_i}(X)$, the (complex) degree of the insertion $\tau_{k_i}(\gamma_i)$ is $\frac{e_i}{2}+k_i-1$. If the sum of the degrees of the descendent insertions does
not equal the virtual dimension, $$\text{dim}_{\mathbb C}\, [ P_n(X,\beta)]^{vir} = \int_\beta c_1(X)\,, $$
the partition function $\ZZ_{\mathsf{P}}\big(X;q\ | \prod_{i=1}^r \tau_{k_i}(\gamma_{i}) \big)_\beta$ vanishes.
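For example, for $X={\mathsf{P}}^3$ and $\beta=\mathsf{L}$ the class of a line, the virtual dimension is $d_\beta = \int_{\mathsf{L}} c_1({\mathsf{P}}^3)=4$. A quick check of the constraint for the insertions appearing in the examples of Section \ref{fex} below: $$\Big(\tfrac{6}{2}+0-1\Big)+\Big(\tfrac{6}{2}+0-1\Big)=4\, , \ \ \ \ \tfrac{6}{2}+2-1=4\, , \ \ \ \ \tfrac{0}{2}+5-1=4\, $$ for $\tau_0(\mathsf{p})\tau_0(\mathsf{p})$, $\tau_2(\mathsf{p})$, and $\tau_5(\mathsf{1})$ respectively, so the corresponding series need not vanish.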
In case $X$ is a nonsingular projective Calabi-Yau 3-fold, the virtual dimension of $P_n(X,\beta)$ is always 0. The rationality of the basic partition function
$$\ZZ_{\mathsf{P}}\big(X;q\ |\, \mathsf{1} \big)_\beta$$ was proven{\footnote{See \cite{pt3} for a similar rationality argument in a restricted (simpler) setting.}} in \cite{Bridge,Toda} by Serre duality, wall-crossing, and a weighted Euler characteristic approach to the virtual class \cite{Kai}. At the moment, the proof for Calabi-Yau 3-folds does not appear to suggest an approach in the general case.
\subsection{Equivariant theory} Let $X$ be a nonsingular quasi-projective toric 3-fold equipped with an action of the 3-dimensional torus $${\mathbf{T}}= {\mathbb C}^* \times {\mathbb C}^* \times {\mathbb C}^*\, .$$ The stable pairs descendent invariants can be lifted to equivariant cohomology (and defined by residues in the open case). For equivariant classes $\gamma_i \in H^*_{{\mathbf{T}}}(X)$, the descendent partition function is a Laurent series in $q$,
$$\mathsf{Z}_{\mathsf{P}}\Big(X;q\ \Big|\ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})\Big)^{\mathbf{T}}_\beta \in \mathbb{Q}(s_1,s_2,s_3)((q))\, ,$$ with coefficients in the field of fractions of $$H^*_{\mathbf{T}}(\bullet)=\mathbb{Q}[s_1,s_2,s_3]\,.$$ The stable pair theory for such toric $X$ is the {\em equivariant} case. A central result of \cite{part1,PP2} is the following rationality property.
\begin{Theorem}[P.-Pixton, 2012] \label{pp12} For $X$ a nonsingular quasi-projective toric 3-fold,
the descendent partition function
$$\ZZ_{\mathsf{P}}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)^{\mathbf{T}}_\beta$$ is the Laurent expansion in $q$ of a rational function in $\mathbb{Q}(q,s_1,s_2,s_3)$. \end{Theorem}
The proof of Theorem \ref{pp12} uses the virtual localization formula of \cite{GraberP}, the capped vertex{\footnote{A basic tool in the proof is the capped {\em descendent} vertex. The 1-leg capped descendent vertex is proven to be rational in \cite{part1}. The 2-leg and 3-leg capped descendent vertices are proven to be rational in \cite{PP2}.}} perspective of \cite{moop}, the quantum cohomology of the Hilbert scheme of points of resolutions of $A_r$-singularities \cite{mo1,hilb1}, and a delicate argument for pole cancellation at the vertex \cite{part1}. In the toric case, calculations can be made effectively, but the computational methods are not very efficient.
When $X$ is a nonsingular projective toric $3$-fold, Theorem \ref{pp12} implies Conjecture \ref{111} for $X$ by taking the non-equivariant limit. However, Theorem \ref{pp12} is much stronger in the toric case than Conjecture \ref{111} since the descendent insertions may exceed the virtual dimension in equivariant cohomology.
In addition to the Calabi-Yau and toric cases, Conjecture \ref{111} has been proven in \cite{PPQ} for complete intersections in products of projective spaces (for descendents of cohomology classes $\gamma_i$ restricted from the ambient space --- the precise statement is presented in Section \ref{compint}). Taken together, the evidence for Conjecture \ref{111} is rather compelling.
\subsection{First examples}\label{fex} Let $X$ be a nonsingular projective Calabi-Yau 3-fold, and let $$C \subset X\, $$ be a rigid nonsingular {\em rational} curve. Let
$$\ZZ_{\mathsf{P}}\big(C\subset X;q\ |\, \mathsf{1} \big)_{d[C]}$$ be the contribution to the partition function
$\ZZ_{\mathsf{P}}\big(X;q\ | \, \mathsf{1} \big)_{d[C]}$ obtained from the moduli of stable pairs {\em supported on} $C$. A localization calculation which goes back to the Gromov-Witten evaluation of \cite{FP} yields \begin{equation}\label{jqq}
\ZZ_{\mathsf{P}}\big(C\subset X;q\ | 1 \big)_{d[C]}=\sum_{\mu\vdash d} \frac{(-1)^{\ell(\mu)}}{{\mathfrak{z}}(\mu)} \prod_{i=1}^{\ell(\mu)} \frac{(-q)^{m_i}} {(1-(-q)^{m_i})^2}\, . \end{equation} The sum here is over all (unordered) partitions of $d$, $$\mu=(m_1,\ldots,m_{\ell(\mu)})\, , \ \ \ \sum_{i=1}^{\ell(\mu)}m_i = d\, ,$$ and ${\mathfrak{z}}(\mu)$ is the standard combinatorial factor
$${\mathfrak{z}}(\mu)= \prod_{i=1}^{\ell(\mu)} m_i \cdot |\text{Aut}(\mu)|\, .$$ The evaluation \eqref{jqq} played an important role in the discovery of the Gromov-Witten/Donaldson-Thomas correspondence in \cite{MNOP1}.
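As a direct check of \eqref{jqq} in low degree, consider $d=1$ and $d=2$. For $d=1$, only $\mu=(1)$ contributes with ${\mathfrak{z}}((1))=1$, while for $d=2$, the partitions $(2)$ and $(1,1)$ each have ${\mathfrak{z}}(\mu)=2$: $$\ZZ_{\mathsf{P}}\big(C\subset X;q\ | 1 \big)_{[C]}=\frac{q}{(1+q)^2}\, , \ \ \ \ \ZZ_{\mathsf{P}}\big(C\subset X;q\ | 1 \big)_{2[C]}= -\frac{q^2}{2(1-q^2)^2}+\frac{q^2}{2(1+q)^4}= \frac{-2q^3}{(1+q)^4(1-q)^2}\, .$$ Both evaluations are rational in $q$, invariant under $q\mapsto \frac{1}{q}$, and have poles only at roots of $1-(-q)^m$ for $m\leq d$, consistent with the conjectures of Sections \ref{funk} and \ref{polec} below.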
In example \eqref{jqq}, only the trivial descendent insertion $\mathsf{1}$ appears. For non-trivial insertions, consider the case where $X$ is ${\mathsf{P}}^3$. Let $$\mathsf{p},\mathsf{L}\in H_*({\mathsf{P}}^3)$$ be the point and line classes in ${\mathsf{P}}^3$ respectively. Geometrically, there is a unique line through two points of ${\mathsf{P}}^3$. The corresponding partition function is also simple, \begin{equation}\label{ggtt}
\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ |\, \tau_0(\mathsf{p})\tau_0(\mathsf{p}) \big )_{\mathsf{L}} = q+2q^2+q^3\, . \end{equation} The resulting series is not only rational, but in fact polynomial. For curve class $\mathsf{L}$, the descendent invariants in \eqref{ggtt} vanish for Euler characteristic greater than 3.
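The $n=1$ coefficient of \eqref{ggtt} can be checked against the Schubert calculus. Since $P_1({\mathsf{P}}^3,\mathsf{L})\cong \mathbb{G}(2,4)$ is nonsingular of the expected dimension 4, the virtual class is the ordinary fundamental class, and (granting the standard identification of $\tau_0(\mathsf{p})$ with the Schubert class $\sigma_2$ of lines through a point) $$\Big\langle \tau_0(\mathsf{p})\tau_0(\mathsf{p})\Big\rangle_{1,\mathsf{L}}^{{\mathsf{P}}^3} = \int_{\mathbb{G}(2,4)} \sigma_2\cdot \sigma_2 = 1\, ,$$ the unique line through two general points.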
In example \eqref{ggtt}, only primary fields (with descendent subscript 0) appear.
An example with higher descendents is
$$\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ |\, \tau_2(\mathsf{p}) \big )_{\mathsf{L}} = \frac{1}{12}q-\frac{5}{6}q^2+\frac{1}{12}q^3\, .$$ The fractions here come from the Chern character. Again, the result is a cubic polynomial. More interesting is the partition function \begin{equation}\label{s555}
\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ |\, \tau_5(\mathsf{1}) \big )_{\mathsf{L}} = \frac{-2q-q^2+31q^3-31q^4+q^5+2q^6}{18(1+q)^3}\, . \end{equation}
The partition functions considered so far are all in the absolute case. For an equivariant descendent series, consider the ${\mathbf{T}}$-action on ${\mathsf{P}}^3$ defined by representation weights $\lambda_0,\lambda_1,\lambda_2,\lambda_3$ on the vector space ${\mathbb C}^4$. Let $$\mathsf{p_0}\in H^4_{\mathbf{T}}({\mathsf{P}}^3)$$ be the class of the ${\mathbf{T}}$-fixed point corresponding to the weight $\lambda_0$ subspace of ${\mathbb C}^4$. Then,
$$\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ | \tau_3(\mathsf{p}_0) \big )^{\mathbf{T}}_{\mathsf{L}} = \frac{\mathsf{A} q-\mathsf{B} q^2+\mathsf{B}q^3-\mathsf{A}q^4}{(1+q)} \, $$ where $\mathsf{A},\mathsf{B}\in H^2_{\mathbf{T}}(\bullet)$ are given by \begin{eqnarray*} \mathsf{A}& =& \frac{1}{8}\lambda_0 - \frac{1}{24}(\lambda_1+\lambda_2+\lambda_3)\, , \\ \mathsf{B} & = & \frac{9}{8}\lambda_0 -\frac{3}{8} (\lambda_1+\lambda_2+\lambda_3)\, . \end{eqnarray*} The descendent insertion here has dimension 5 which exceeds the virtual dimension 4 of the moduli space of stable pairs, so the invariants lie in $H^2_{\mathbf{T}}(\bullet)$. The obvious symmetry in all of these descendent series is explained by the conjectural functional equation (discussed in Section \ref{funk}).
All of the formulas discussed above are calculated by the virtual localization formula \cite{GraberP} for stable pairs. The ${\mathbf{T}}$-fixed points, virtual tangent weights, and virtual normal weights are described in detail in \cite{pt2}.
\subsection{Example in degree 2} \label{dd22} A further example in the absolute case is the degree 2 series
{\footnotesize{$\mathsf{Z}_{\mathsf{P}}\big(\mathsf{P}^3; q \ |\, \tau_9(1)\big)_{2\mathsf{L}}$}}. While a rigorous answer could in principle be obtained, the computer calculation available here outputs only a conjectural evaluation,{\footnote{The answer relies on an old program for the theory of ideal sheaves written by A. Okounkov and a newer DT/PT descendent correspondence \cite{oop}.}}
{\footnotesize{
$$\hspace{-280pt}\mathsf{Z}_{\mathsf{P}}\big(\mathsf{P}^3; q \ |\, \tau_9(1)\big)_{2\mathsf{L}} =$$}} {\tiny{
$$ -\frac{(73 q^{12} - 825 q^{11} - 124 q^{10} + 5945 q^{9} + 779 q^{8} - 36020 q^{7} + 60224 q^{6} - 36020 q^{5} + 779 q^{4} + 5945 q^{3} - 124 q^2 - 825 q + 73) q} {60480 (1 + q)^3 (-1 + q)^3 }\, .$$}}
\noindent The computer calculations of Section \ref{fex} all provide rigorous results and could be improved to handle higher degree curves, but the code has not yet been written.
\subsection{Relative theory}\label{relth} Let $X$ be a nonsingular projective $3$-fold containing a nonsingular divisor $$D\subset X\, .$$ The {\em relative} case concerns the geometry $X/D$.
The moduli space $P_{n}(X/D,\beta)$ parameterizes stable relative pairs \begin{equation}\label{vyq} s:{\mathcal{O}}_{X[k]} \rightarrow F \end{equation} on the $k$-step{\footnote{We follow the terminology of \cite{L, LW}.}} degeneration $X[k]$.
\noindent $\bullet$ The algebraic variety $X[k]$ is constructed by attaching a chain of $k$ copies of the 3-fold $\mathbb{P}(N_{X/D}\oplus {\mathcal{O}}_D)$ equipped with 0-sections and $\infty$-sections $$D \stackrel{\iota_0} \longrightarrow \mathbb{P}(N_{X/D}\oplus {\mathcal{O}}_D) \stackrel{\ \iota_\infty}{\longleftarrow} D$$ defined by the summands $N_{X/D}$ and ${\mathcal{O}}_D$ respectively. The $k$-step degeneration $X[k]$ is a union $$X \cup_D \mathbb{P}(N_{X/D}\oplus {\mathcal{O}}_D)\cup_D \mathbb{P}(N_{X/D}\oplus {\mathcal{O}}_D) \cup_D \cdots \cup_D \mathbb{P}(N_{X/D}\oplus {\mathcal{O}}_D)\, ,$$ where the attachments are made along $\infty$-sections on the left and $0$-sections on the right. The original divisor $D\subset X$ is considered an $\infty$-section for the attachment rules. The rightmost component of $X[k]$ carries the last $\infty$-section, $$D_\infty \subset X[k],$$ called the {\em relative divisor}. The $k$-step degeneration also admits a canonical contraction map \begin{equation}\label{vssv} X[k] \rightarrow X \end{equation} collapsing all the attached components to $D\subset X$.
\noindent $\bullet$ The sheaf $F$ on $X[k]$ is of Euler characteristic $$\chi(F)=n$$ and has 1-dimensional support on $X[k]$ which pushes-down via the contraction \eqref{vssv} to the class $$\beta\in H_2(X,\mathbb{Z}).$$
\noindent $\bullet$ The following stability conditions are required for stable relative pairs: \begin{enumerate} \item[(i)] $F$ is pure with finite locally free resolution, \item[(ii)] the higher derived functors of the restriction of $F$ to the singular{\footnote{The singular loci of $X[k]$, by convention, include also the relative divisor $D_\infty\subset X[k]$ even though $X[k]$ is nonsingular along $D_\infty$ as a variety. The perspective of log geometry is more natural here.}} loci of $X[k]$ vanish, \item[(iii)] the section $s$ has 0-dimensional cokernel supported away from the singular loci of $X[k]$, and \item[(iv)] the pair \eqref{vyq} has only finitely many automorphisms covering the automorphisms of $X[k]/X$. \end{enumerate}
The moduli space $P_n(X/D,\beta)$ of stable relative pairs is a complete Deligne-Mumford stack equipped with a map to the Hilbert scheme of points of $D$ via the restriction of the pair to the relative divisor, $$P_n(X/D,\beta) \to \Hilb(D,\int_\beta [D])\ .$$ Cohomology classes on $\Hilb(D,\int_\beta [D])$ may thus be pulled-back to the moduli space $P_n(X/D,\beta)$.
We will use the \emph{Nakajima basis} of $H^*(\Hilb(D,\int_\beta [D]))$ indexed by a partition $\mu$ of $\int_\beta [D]$ labeled by cohomology classes of $D$. For example, the class $$
\left.\big|\mu\right\rangle \in H^*(\Hilb(D,\int_\beta [D]))\,, $$ with all cohomology labels equal to the identity,
is $\prod \mu_i^{-1}$ times the Poincar\'e dual of the closure of the subvariety formed by unions of schemes of length $$ \mu_1,\dots, \mu_{\ell(\mu)} $$ supported at $\ell(\mu)$ distinct points of $D$.
The stable pairs descendent invariants in the relative case are defined using the universal sheaf just as in the absolute case. The universal sheaf is defined here on the universal degeneration of $X/D$ over $P_n(X/D,\beta)$. The cohomology classes $\gamma_i\in H^*(X)$ are pulled-back to the universal degeneration via the contraction map \eqref{vssv}. The descendent partition function with boundary conditions $\mu$ is a Laurent series in $q$, $$\mathsf{Z}_{\mathsf{P}}
\Big( X/D;q\ \Big|\, \ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})
\, \Big|\, \mu \Big)_\beta \in \mathbb{Q}((q))\, .$$ The basic rationality statement here is parallel to the absolute and equivariant cases.
\begin{Conjecture} \label{222} For $X/D$ a nonsingular projective relative 3-fold, the descendent partition function $$\mathsf{Z}_{\mathsf{P}}
\Big( X/D;q\ \Big|\, \prod_{i=1}^r \tau_{k_i}(\gamma_{i})
\, \Big|\, \mu \Big)_\beta \in \mathbb{Q}((q))\, $$ is the Laurent expansion in $q$ of a rational function in $\mathbb{Q}(q)$. \end{Conjecture}
In case $X$ is a nonsingular quasi-projective toric 3-fold and $D\subset X$ is a toric divisor, an {\em equivariant relative} stable pairs theory can be defined. The rationality conjecture then takes the form expected by combining the rationality statements in the equivariant and relative cases.
\begin{Conjecture} \label{333} For $X/D$ a nonsingular quasi-projective relative toric 3-fold, the descendent partition function $$\mathsf{Z}_{\mathsf{P}}
\Big( X/D;q\ \Big|\, \prod_{i=1}^r \tau_{k_i}(\gamma_{i})
\, \Big|\, \mu \Big)^{\mathbf{T}}_\beta \in \mathbb{Q}(s_1,s_2,s_3)((q))\, $$ is the Laurent expansion in $q$ of a rational function in $\mathbb{Q}(q,s_1, s_2,s_3)$. \end{Conjecture}
Of course, both $\gamma_i\in H^\bullet_{\mathbf{T}}(X)$ and the Nakajima basis element $$\mu \in H^*_{\mathbf{T}}(\Hilb(D,\int_\beta [D]))$$ must be taken here in equivariant cohomology. While the full statement of Conjecture \ref{333} remains open, a partial result follows from Theorem \ref{pp12} and \cite[Theorem 2]{part1} which addresses the non-equivariant limit in the projective relative toric case.
\begin{Theorem}[P.-Pixton, 2012] \label{ppp12} For $X/D$ a nonsingular projective relative toric 3-fold,
the descendent partition function
$$\ZZ_{\mathsf{P}}\Big(X/D;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i)
\, \Big|\, \mu \Big)_\beta$$ is the Laurent expansion in $q$ of a rational function in $\mathbb{Q}(q)$. \end{Theorem}
As an example of a computation in closed form in the equivariant relative case, consider the geometry of the {\em cap}, $${\mathbb C}^2 \times \mathsf{P}^1 / {\mathbb C}^2_\infty\, ,$$ where ${\mathbb C}^2_\infty \subset {\mathbb C}^2 \times \mathsf{P}^1$ is the fiber of
$${\mathbb C}^2 \times \mathsf{P}^1 \rightarrow \mathsf{P}^1$$ over $\infty \in \mathsf{P}^1$. The first two factors of the 3-torus ${\mathbf{T}}$ act on the ${\mathbb C}^2$-factor of the cap with tangent weights $-s_1$ and $-s_2$. The third factor of ${\mathbf{T}}$ acts on ${\mathsf{P}}^1$ factor of the cap with tangent weights $-s_3$ and $s_3$ at $0\in {\mathsf{P}}^1$ and $\infty\in {\mathsf{P}}^1$ respectively.
From several perspectives, the equivariant relative descendent partition function
$$\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}( \, \tau_d(\mathsf{p})\, |\,(d) )^{\mathbf{T}}_d = \sum_{n}
\Big\langle \tau_{d}(\mathsf{p})\, \Big| \, (d) \Big\rangle_{\!n,d}^{\! \text{Cap}}\, q^n\ , \ \ \ \ d>0 \ $$ is the most important series in the cap geometry \cite{PPstat}. Here, $$\mathsf{p}\in H_{\mathbf{T}}^2({\mathbb C}^2 \times {\mathsf{P}}^1)$$ is the class of the ${\mathbf{T}}$-fixed point of ${\mathbb C}^2\times {\mathsf{P}}^1$ over $0\in {\mathsf{P}}^1$, and the Nakajima basis element $(d)$ is weighted with the identity class in $H^*_{\mathbf{T}}(\text{Hilb}({\mathbb C}^2,d))$. A central result of \cite{PPstat} is the following calculation.{\footnote{The formula here differs from \cite{PPstat} by a factor of $s_1s_2$ since a different convention for the cohomology class $\mathsf{p}$ is taken.}}
\begin{Theorem}[P.-Pixton, 2011] \label{yty7} We have $$
\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}( \, \tau_d(\mathsf{p})\, |\,(d) )^{\mathbf{T}}_d
= \frac{q^d}{d!}\left(\frac{s_1+s_2}{2}\right) \sum_{i=1}^d \frac{ 1+(-q)^{i}}{1-(-q)^i} \ . $$ \end{Theorem}
In the above formula, the coefficient of $q^d$, $$ \big\langle \tau_d(\mathsf{p}), (d) \big\rangle_{\text{Hilb}({\mathbb C}^2,d)}= \frac{s_1+s_2}{2\cdot (d-1)!}\, ,$$ is the classical $({\mathbb C}^*)^2$-equivariant pairing on the Hilbert scheme of points $\text{Hilb}(\mathbb{C}^2,d)$. The proof of Theorem \ref{yty7} is a rather delicate localization calculation (using several special properties such as the \`a priori divisibility of the answer by $s_1+s_2$ from the holomorphic symplectic form on $\text{Hilb}(\mathbb{C}^2,d)$).
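For $d=1$, Theorem \ref{yty7} specializes to $$\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}( \, \tau_1(\mathsf{p})\, |\,(1) )^{\mathbf{T}}_1 = q\left(\frac{s_1+s_2}{2}\right)\frac{1-q}{1+q} = \left(\frac{s_1+s_2}{2}\right)\big(q-2q^2+2q^3-2q^4+\ldots\big)\, ,$$ and the leading coefficient recovers the classical pairing above.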
The difficulty in Theorem \ref{yty7} is obtaining a closed form evaluation for all $d$. Any particular descendent series can be calculated by the localization methods. A calculation, for example, {\em not} covered by Theorem \ref{yty7} is \begin{multline}\label{fvvt}
\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}( \, \tau_2(\mathsf{p})\, |\,(1) )^{\mathbf{T}}_1
= \big({2s_1^2+3s_1s_2+2s_2^2}\big)q\frac{(1+q^2)}{(1+q)^2} \\ + \big({6s_3(s_1+s_2)-2s_1^2-6s_1s_2-2s^2_2}\big)\frac{q^2}{(1+q)^2}\, . \end{multline} A simple closed formula for all descendents of the cap is unlikely to exist.
\subsection{Functional equation}\label{funk} In case $X$ is a nonsingular Calabi-Yau 3-fold, the descendent series viewed as a rational function in $q$ satisfies the symmetry \begin{equation}\label{symm}
\ZZ_{\mathsf{P}}\big(X;\frac{1}{q}\, \big|\, 1 \big)_\beta
= \ZZ_{\mathsf{P}}\big(X;{q}\, \big|\, 1 \big)_\beta \, \end{equation} as conjectured in \cite{MNOP1,pt} and proven in \cite{Bridge,Toda}. In fact, a functional equation for the descendent partition function is expected to hold in {\em all} cases (absolute, equivariant, and relative). For the relative case, the functional equation is given by the following formula{\footnote{The conjecture is stated in \cite{part1,PPstat} with a sign error: the factor of $q^{d_\beta}$ on the right side of the functional equation \cite{part1,PPstat} should be $(-q)^{d_\beta}$. Then two factors of $(-1)^{d_\beta}$ multiply to $1$ and yield Conjecture \ref{444} as stated here.}} \cite{part1,PPstat}.
\begin{Conjecture}[P.-Pixton, 2012] \label{444} For $X/D$ a nonsingular projective relative 3-fold,
the descendent series viewed as a rational function in $q$ satisfies the functional equation
$$\ZZ_{\mathsf{P}}\Big(X;\frac{1}{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i)
\, \Big|\, \mu \Big)_\beta
= (-1)^{|\mu|-\ell(\mu) +
\sum_{i=1}^r k_i} q^{-d_\beta} \ZZ_{\mathsf{P}}\Big(X;{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \, \Big|\, \mu \Big)_\beta\, $$ where the constants are
$$|\mu|=\int_\beta [D]\,, \ \ \ \ell(\mu)= {\text{\em length}}(\mu)\,, \ \ \ d_\beta = \int_{\beta}c_1(X)\, . $$ \end{Conjecture}
The functional equation in the absolute case is obtained by specializing the divisor $D\subset X$ to the empty set in Conjecture \ref{444}:
$$\ZZ_{\mathsf{P}}\Big(X;\frac{1}{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta = (-1)^{
\sum_{i=1}^r k_i} q^{-d_\beta}\, \ZZ_{\mathsf{P}}\Big(X;{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \, \Big)_\beta\, . $$ The functional equation in the equivariant case is conjectured to be identical,
$$\ZZ_{\mathsf{P}}\Big(X;\frac{1}{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)^{\mathbf{T}}_\beta = (-1)^{
\sum_{i=1}^r k_i} q^{-d_\beta}\, \ZZ_{\mathsf{P}}\Big(X;{q}\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \, \Big)^{\mathbf{T}}_\beta\, .$$ Finally, in the equivariant relative case, the functional equation is expected to be the same as in Conjecture \ref{444}.
As an example, the descendent series for the cap evaluated in Theorem \ref{yty7} satisfies the conjectured functional equation: \begin{eqnarray*} \mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}\left(\frac{1}{q};
\, \tau_d(\mathsf{p})\, \Big|\,(d) \right)^{\mathbf{T}}_d & = & \frac{q^{-d}}{d!}\left(\frac{s_1+s_2}{2}\right) \sum_{i=1}^d \frac{ 1+(-q)^{-i}}{1-(-q)^{-i}} \\ & = & \frac{1}{q^{2d}}\frac{q^{d}}{d!}\left(\frac{s_1+s_2}{2}\right) \sum_{i=1}^d \frac{ (-q)^{i}+1}{(-q)^{i}-1} \\ & = & \frac{(-1)^{d-1+d}}{q^{2d}} \mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}\big(q;
\, \tau_d(\mathsf{p})\, |\,(d) \big)^{\mathbf{T}}_d\, . \end{eqnarray*} Here, the constants for the exponent of $(-1)$ in the functional equation are
$$|(d)|=d\,, \ \ \ \ell(d)= 1\,, \ \ \ d_\beta = 2d\, . $$ It is straightforward to check the functional equation in all the examples of Sections \ref{fex}--\ref{dd22}.
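For instance, the series \eqref{s555} has $d_\beta=4$ and $\sum_i k_i = 5$, so the prediction reads $$\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;\tfrac{1}{q}\ |\, \tau_5(\mathsf{1}) \big )_{\mathsf{L}} = -q^{-4}\, \ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ |\, \tau_5(\mathsf{1}) \big )_{\mathsf{L}}\, .$$ Writing \eqref{s555} as $\frac{N(q)}{18(1+q)^3}$, the numerator satisfies $N(\tfrac{1}{q})=-q^{-7}N(q)$ and the denominator satisfies $18(1+\tfrac{1}{q})^3=18\, q^{-3}(1+q)^3$, so the quotient indeed transforms by $-q^{-4}$.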
The evidence for the functional equation for descendent series
is not as large as for the rationality. For the equivariant relative cap, the functional equation is proven in \cite{PPstat} for all descendents series
$$\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}\left( \, \prod_{i=1}^r\tau_{k_i}(\mathsf{p})\, \Big|\,(\mu) \right)^{\mathbf{T}}_d$$
{\em after} the specialization $s_3=0$. The predicted functional equation for
$$\mathsf{Z}^{\mathsf{cap}}_{\mathsf{P}}( \, \tau_2(\mathsf{p})\, |\,(1) )^{\mathbf{T}}_1$$
{\em before} the specialization $s_3=0$ can be easily checked from the formula \eqref{fvvt}. The functional equation is also known to hold for a special class of descendent insertions in the nonsingular projective toric case \cite{PPDC}, as will be discussed in Section \ref{eee999}.
\subsection{Pole constraints} \label{polec}
Let $X$ be a nonsingular projective 3-fold, and let $\beta\in H_2(X,\mathbb{Z})$ be a nonzero class.
For $\beta$ to be an effective curve class, the image of $\beta$ in the lattice \begin{equation}\label{tttt} H_2(X,\mathbb{Z})/\text{torsion} \end{equation} must also be nonzero. Let $\text{div}(\beta)\in \mathbb{Z}_{>0}$ be the divisibility of the image of $\beta$ in the lattice \eqref{tttt}.
\begin{Conjecture} \label{555} For $d=\text{\em div}(\beta)$, the poles in $q$ of the rational function
$$\ZZ_{\mathsf{P}}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta$$ may occur only at $q=0$ and the roots of the polynomials
$$\{ \, 1-(-q)^m \, | \, 1 \leq m \leq d\, \}.$$ \end{Conjecture}
Of the above conjectures, the evidence for Conjecture \ref{555} is the weakest. The prediction is based on a study of the stable pairs theory of local curves where the above pole restrictions are always found. For example, the evaluation of Theorem \ref{yty7} is consistent with the pole statement (even though Theorem \ref{yty7} concerns the equivariant relative case). A promotion of Conjecture \ref{555} to cover all cases also appears reasonable.
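As a direct illustration, the degree 2 local curve evaluation of Section \ref{fex}, $$\ZZ_{\mathsf{P}}\big(C\subset X;q\ | 1 \big)_{2[C]}= \frac{-2q^3}{(1+q)^4(1-q)^2}\, ,$$ has poles only at $q=-1$ and $q=1$, the roots of $1-(-q)$ and $1-(-q)^2$. Likewise, the denominator $(1+q)^3(1-q)^3$ of the degree 2 series of Section \ref{dd22} is a product of the factors allowed for $d=2$.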
\subsection{Complete intersections}\label{compint} Rationality results for non-toric 3-folds are proven in \cite{PPQ} by degeneration methods for several geometries. The simplest to state concern nonsingular complete intersections of ample divisors $$ X \subset {\mathsf{P}}^{n_1} \times \cdots \times {\mathsf{P}}^{n_m}\ .$$
\begin{Theorem}[P.-Pixton, 2012] \label{qqq111f} Let $X$ be a nonsingular Fano or Calabi-Yau complete intersection 3-fold in a product of projective spaces. For even classes $\gamma_i \in H^{2*}(X)$, the descendent partition function
$$\ZZ_{\mathsf{P}}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta$$ is the Laurent expansion of a rational function in $\mathbb{Q}(q)$. \end{Theorem}
By the Lefschetz hyperplane result, the even cohomology of such $X$ is exactly the image of the restricted cohomology from the product of projective spaces. Theorem \ref{qqq111f} does {\em not} cover the primitive cohomology in $H^3(X)$. Moreover, even for descendents of the even cohomology $H^{2*}(X)$, the functional equation and pole conjectures are open.
\section{Gromov-Witten/Pairs correspondence} \label{222r}
\subsection{Overview} Let $X$ be a nonsingular projective variety. Descendent classes on the moduli spaces of stable maps $\overline{M}_{g,r}(X,\beta)$ in Gromov-Witten theory, defined using cotangent lines at the marked points,
have played a central role since the beginning of the subject in the early 90s. Topological recursion relations, $J$-functions, and Virasoro constraints all essentially concern descendents. The importance of descendents in Gromov-Witten theory was hardly a surprise: cotangent lines on the moduli spaces $\overline{M}_{g,r}$ of stable curves were basic to their geometric study
before Gromov-Witten theory was developed.
In case $X$ is a nonsingular projective {\em 3-fold}, descendent invariants are defined for both
Gromov-Witten theory and the theory of stable pairs. The geometric constructions are rather different, but a surprising correspondence conjecturally holds: the two descendent theories are related by a universal correspondence for {\em all} nonsingular projective 3-folds. In other words, the two descendent theories contain exactly the same data.
The origin of the Gromov-Witten/Pairs correspondence is found in the study of ideal sheaves in \cite{MNOP1, MNOP2}. Since the descendent theory of stable pairs is much better behaved, the results and conjectures take a better form for stable pairs \cite{PPDC,PPQ}.
The rationality results and conjectures of Section \ref{111r} are needed for the statement of the Gromov-Witten/Pairs correspondence. Just as in Section \ref{111r}, we present the absolute, equivariant, and relative cases. A more subtle discussion of diagonals is required for the relative case.
\subsection{Descendents in Gromov-Witten theory} Let $X$ be a nonsingular projective 3-fold. Gromov-Witten theory is defined via integration over the moduli space of stable maps. Let
$\overline{M}_{g,r}(X,\beta)$ denote the moduli space of $r$-pointed stable maps from connected genus $g$ curves to $X$ representing the class $\beta\in H_2(X, \mathbb{Z})$. Let $$\text{ev}_i: \overline{M}_{g,r}(X,\beta) \rightarrow X\, ,$$ $$ {\mathbb{L}}_i \rightarrow \overline{M}_{g,r}(X,\beta)$$ denote the evaluation maps and the cotangent line bundles associated to the marked points. Let $\gamma_1, \ldots, \gamma_r\in H^*(X)$, and let $$\psi_i = c_1({\mathbb{L}}_i) \in H^2(\overline{M}_{g,r}(X,\beta))\, .$$ The {\em descendent fields}, denoted by $\tau_k(\gamma)$, correspond to the classes $\psi_i^k \text{ev}_i^*(\gamma)$ on the moduli space of stable maps. Let $$\Big\langle \tau_{k_1}(\gamma_{1}) \cdots \tau_{k_r}(\gamma_{r})\Big\rangle_{g,\beta} = \int_{[\overline{M}_{g,r}(X,\beta)]^{vir}} \prod_{i=1}^r \psi_i^{k_i} \text{ev}_i^*(\gamma_{i})$$ denote the descendent Gromov-Witten invariants. Foundational aspects of the theory are treated, for example, in \cite{BehFan, LiTian}.
Let $C$ be a possibly disconnected curve with at worst nodal singularities. The genus of $C$ is defined by $1-\chi({\mathcal O}_C)$. Let $\overline{M}'_{g,r}(X,\beta)$ denote the moduli space of maps with possibly {disconnected} domain curves $C$ of genus $g$ with {\em no} collapsed connected components. The latter condition requires
each connected component of $C$ to represent a nonzero class in $H_2(X,{\mathbb Z})$. In particular, $C$ must represent a {nonzero} class $\beta$.
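For example, if $C=C_1\sqcup C_2$ is a disjoint union of connected curves of genera $g_1$ and $g_2$, then $$\chi({\mathcal O}_C)=(1-g_1)+(1-g_2)\, , \ \ \ \ g(C) = 1-\chi({\mathcal O}_C) = g_1+g_2-1\, .$$ In particular, disconnected curves may have negative genus, so the genus sum in the partition function below runs over all of $\mathbb{Z}$.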
We define the descendent invariants in the disconnected case by $$\Big\langle \tau_{k_1}(\gamma_{1}) \cdots \tau_{k_r}(\gamma_{r})\Big\rangle'_{g,\beta} = \int_{[\overline{M}'_{g,r}(X,\beta)]^{vir}} \prod_{i=1}^r \psi_i^{k_i} \text{ev}_i^*(\gamma_{i}).$$ The associated partition function is defined by{\footnote{Our notation follows \cite{MNOP2,moop} and emphasizes the role of the moduli space $\overline{M}'_{g,r}(X,\beta)$. The degree 0 collapsed contributions will not appear anywhere in the paper.}}
\begin{equation} \label{abc}
\mathsf{Z}'_{\mathsf{GW}}\Big(X;u\ \Big|\ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})\Big)_\beta = \sum_{g\in{\mathbb Z}} \Big \langle \prod_{i=1}^r \tau_{k_i}(\gamma_{i}) \Big \rangle'_{g,\beta} \ u^{2g-2}. \end{equation} Since the domain components must map nontrivially, an elementary argument shows the genus $g$ in the sum \eqref{abc} is bounded from below.
\subsection{Dimension constraints} Descendents in Gromov-Witten and stable pairs theories are obtained via tautological structures over the moduli spaces $$\overline{M}'_{g,r}(X,\beta)\, , \ \ \ \ P_{n}(X,\beta)\times X$$ respectively. The descendents $\tau_k(\gamma)$ in both cases mix the characteristic classes of the tautological sheaves $${\mathbb{L}}_i \rightarrow \overline{M}'_{g,r}(X,\beta)\, , \ \ \ \ \mathbb{F} \rightarrow P_{n}(X,\beta)\times X$$
with the pull-back of $\gamma\in H^*(X)$ via the evaluation and projection morphisms respectively.
In the absolute (nonequivariant) case, the Gromov-Witten and stable pairs descendent series \begin{equation}\label{fhh6}
\mathsf{Z}'_{\mathsf{GW}}\Big(X;u\ \Big|\ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})\Big)_\beta\, , \ \ \ \
\mathsf{Z}_{\mathsf{P}}\Big(X;q\ \Big|\ \prod_{i=1}^r \tau_{k_i}(\gamma_{i})\Big)_\beta \end{equation} both satisfy dimension constraints. For $\gamma_i\in H^{e_i}(X)$, the (real) cohomological degrees of the descendent insertions in the Gromov-Witten and stable pairs theories are $$\tau_{k_i}(\gamma_i)\in H^{{e_i}+2k_i}(\overline{M}'_{g,r}(X,\beta))\, , \ \ \ \ \tau_{k_i}(\gamma_i)\in H^{{e_i}+2k_i-2}(P_{n}(X,\beta))\, .$$ Since the virtual dimensions are $$\text{dim}_{\mathbb{C}} \, [\overline{M}'_{g,r}(X,\beta)]^{vir} = \int_\beta c_1(T_X) + r\, , \ \ \ \ \text{dim}_{\mathbb{C}} \, [P_n(X,\beta)]^{vir} = \int_\beta c_1(T_X) $$ respectively, the dimension constraints $$\sum_{i=1}^r \frac{e_i}{2}+k_i = \int_\beta c_1(T_X) + r \, , \ \ \ \ \sum_{i=1}^r \frac{e_i}{2}+k_i -1 = \int_\beta c_1(T_X) $$ exactly match.
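To see the matching concretely, take $X={\mathsf{P}}^3$, $\beta=\mathsf{L}$, and the two point insertions $\tau_0(\mathsf{p})\tau_0(\mathsf{p})$, so $e_1=e_2=6$, $k_1=k_2=0$, and $r=2$: $$\tfrac{6}{2}+\tfrac{6}{2} = 4+2\, , \ \ \ \ \Big(\tfrac{6}{2}-1\Big)+\Big(\tfrac{6}{2}-1\Big) = 4\, ,$$ so both theories impose the same condition on the insertions.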
After the matching of the dimension constraints, we can further reasonably ask if there is a relationship between the Gromov-Witten and stable pairs descendent series \eqref{fhh6}. The question has two immediately puzzling features: \begin{enumerate} \item[(i)] The series involve different moduli spaces and universal structures. \item[(ii)] The variables $u$ and $q$ of the two series are different. \end{enumerate} Though the worry (i) is correct, both moduli spaces are essentially based upon the geometry of curves in $X$, so there is hope for a connection. The {\em descendent correspondence} proposes a precise relationship between the Gromov-Witten and stable pairs descendent series, but only after a change of variables to address (ii).
\subsection{Descendent notation}
Let $X$ be a nonsingular projective 3-fold.
Let $\widehat{\alpha}=(\widehat{\alpha}_1,\ldots, \widehat{\alpha}_{\widehat{\ell}})$, $$\widehat{\alpha}_1\geq \ldots\geq \widehat{\alpha}_{\widehat{\ell}} > 0\, ,$$ be a partition
of size $|\widehat{\alpha}|$ and length $\widehat{\ell}$. Let $$\iota_\Delta:\Delta\rightarrow X^{\widehat{\ell}}$$
be the inclusion of the small diagonal{\footnote{The small diagonal $\Delta$ is the set of points of $X^{\widehat{\ell}}$ for which the coordinates $(x_1,\ldots, x_{\hat{\ell}})$ are all equal $x_i=x_j$.}} in the product $X^{\widehat{\ell}}$. For $\gamma\in H^*(X)$, we write $$\gamma\cdot \Delta =\iota_{\Delta*}(\gamma) \in H^*(X^{\widehat{\ell}})\, .$$ Using the K\"unneth decomposition, we have $$\gamma\cdot \Delta= \sum_{{j_1, \ldots, j_{\hat{\ell}}}} c^\gamma_{j_1,\ldots, j_{\hat{\ell}}}\, \theta_{j_1} \otimes \ldots\otimes \theta_{j_{\hat{\ell}}}\, ,$$ where $\{\theta_j\}$ is a $\mathbb{Q}$-basis of $H^*(X)$. We define the descendent insertion $\tau_{\widehat{\alpha}}(\gamma)$ by \begin{equation}\label{j77833} \tau_{\widehat{\alpha}}(\gamma)= \sum_{j_1,\ldots,j_{\hat{\ell}}} c^\gamma_{j_1,\ldots, j_{\hat{\ell}}}\, \tau_{\widehat{\alpha}_1-1}(\theta_{j_1}) \cdots\tau_{\widehat{\alpha}_{\hat{\ell}}-1}(\theta_{j_{\hat{\ell}}})\ . \end{equation} Three basic examples are: \begin{enumerate} \item[$\bullet$] If $\widehat{\alpha}=(\widehat{a}_1)$, then $$\tau_{(\, \widehat{a}_1\,)}(\gamma)= \tau_{\widehat{a}_1-1}(\gamma)\, .$$ The convention of shifting the descendent by $1$ allows us to index descendent insertions by standard partitions $\widehat{\alpha}$
and follows the notation of \cite{PPDC}.
\item[$\bullet$] If $\widehat{\alpha}=(\widehat{a}_1,\widehat{a}_2)$ and $\gamma=1$ is the identity class, then $$\tau_{(\, \widehat{a}_1,\, \widehat{a}_2\, )}(1)= \sum_{j_1,j_2} c^1_{j_1,j_2} \tau_{\widehat{a}_1-1}(\theta_{j_1})\, \tau_{\widehat{a}_2-1}(\theta_{j_2})\, ,$$ where $\Delta = \sum_{j_1,j_2} c^1_{j_1,j_2}\, \theta_{j_1} \otimes \theta_{j_2}$ is the standard K\"unneth decomposition of the diagonal in $X^2$. \item[$\bullet$] If $\gamma$ is the class of a point, then \[ \tau_{\widehat{\alpha}}(\mathsf{p})= \tau_{\widehat{\alpha}_1-1}(\mathsf{p})\cdots\tau_{\widehat{\alpha}_{\hat{\ell}}-1}(\mathsf{p}). \] \end{enumerate} By the multilinearity of descendent insertions, formula \eqref{j77833} does not depend upon the basis choice $\{\theta_j\}$.
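As a concrete instance of \eqref{j77833}, let $X={\mathsf{P}}^3$ with hyperplane class $\mathsf{H}$, so the diagonal of $X^2$ decomposes as $\Delta=\sum_{i=0}^3 \mathsf{H}^i\otimes \mathsf{H}^{3-i}$. Then $$\tau_{(\widehat{a}_1,\, \widehat{a}_2)}(1)= \sum_{i=0}^{3} \tau_{\widehat{a}_1-1}(\mathsf{H}^i)\, \tau_{\widehat{a}_2-1}(\mathsf{H}^{3-i})\, ,$$ and, in particular, $\tau_{(1,1)}(1)=\tau_0(1)\tau_0(\mathsf{p})+\tau_0(\mathsf{H})\tau_0(\mathsf{H}^2)+ \tau_0(\mathsf{H}^2)\tau_0(\mathsf{H})+\tau_0(\mathsf{p})\tau_0(1)$.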
\subsection{Correspondence matrix} \label{corrmat}
A central result of \cite{PPDC} is the construction of a universal correspondence matrix $\widetilde{\mathsf{K}}$ indexed by partitions $\alpha$ and $\widehat{\alpha}$ of positive size with{\footnote{Here, $i^2=-1$.}} $$\widetilde{\mathsf{K}}_{\alpha,\widehat{\alpha}}\in \mathbb{Q}[i,c_1,c_2,c_3]((u))\, . $$ The elements of $\widetilde{\mathsf{K}}$ are constructed from the capped descendent vertex \cite{PPDC} and satisfy two basic properties:
\begin{enumerate} \item[(i)] The vanishing $\widetilde{\mathsf{K}}_{\alpha,\widehat{\alpha}}=0$ holds {unless}
$|{\alpha}|\geq |\widehat{\alpha}|$. \item[(ii)] The $u$ coefficients of $\widetilde{\mathsf{K}}_{\alpha,\widehat{\alpha}}\in \mathbb{Q}[i,c_1,c_2,c_3]((u))$ are homogeneous{\footnote{The variable $c_i$ has degree $i$ for the homogeneity.}} in the variables $c_i$
of degree $$|\alpha|+\ell(\alpha) - |\widehat{\alpha}| - \ell(\widehat{\alpha})-3(\ell(\alpha)-1).$$ \end{enumerate}
Via the substitution \begin{equation} \label{h3492} c_i=c_i(T_X), \end{equation} the matrix elements of $\widetilde{\mathsf{K}}$ act by cup product on the cohomology
of $X$ with $\mathbb{Q}[i]((u))$-coefficients.
The matrix $\widetilde{\mathsf{K}}$ is used to define a correspondence rule \begin{equation}\label{pddff} {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}\ \ \mapsto\ \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}\ . \end{equation} The definition of the right side of \eqref{pddff} requires a sum over all set partitions $P$ of $\{ 1,\ldots, \ell \}$.
For such a set partition $P$, each element $S\in P$ is a subset of $\{1,\ldots, \ell\}$. Let $\alpha_S$ be the associated subpartition of $\alpha$, and let $$\gamma_S = \prod_{i\in S}\gamma_i.$$ In case all cohomology classes $\gamma_j$ are even, we define the right side of the correspondence rule \eqref{pddff} by \begin{equation}\label{mqq23} \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} = \sum_{P \text{ set partition of }\{1,\ldots,\ell\}}\ \prod_{S\in P}\ \sum_{\widehat{\alpha}}\tau_{\widehat{\alpha}}(\widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}\cdot\gamma_S) \ . \end{equation} The second sum in \eqref{mqq23} is over all partitions $\widehat{\alpha}$ of positive size. However, by the vanishing of property (i), $$\widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}=0 \ \ \text{unless}
\ \ |{\alpha_S}|\geq |\widehat{\alpha}|\, , $$
the summation index may be restricted to partitions $\widehat{\alpha}$ of positive size bounded by $|\alpha_S|$.
Suppose $|\alpha_S|=|\widehat{\alpha}|$ in the second sum in \eqref{mqq23}. The homogeneity property (ii) then places a strong constraint. The $u$ coefficients of \begin{equation*} \widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}\in \mathbb{Q}[i,c_1,c_2,c_3]((u)) \end{equation*} are homogeneous of degree \begin{equation}\label{d2399} 3-2\ell(\alpha_S) - \ell(\widehat{\alpha}) \, . \end{equation} For the matrix element $\widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}$ to be nonzero, the degree \eqref{d2399} must be nonnegative. Since the lengths of $\alpha_S$ and $\widehat{\alpha}$ are at least 1, nonnegativity of \eqref{d2399} is only possible if $$\ell(\alpha_S)= \ell(\widehat{\alpha})=1\, .$$ Then, we also have $\alpha_S=\widehat{\alpha}$ since the sizes match.
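For example, for $\alpha_S=(2)$, the coefficient $\widetilde{\mathsf{K}}_{(2),(1,1)}$ has degree $3-2\cdot 1-2=-1$ and therefore vanishes, while $\widetilde{\mathsf{K}}_{(2),(2)}$ has degree $3-2\cdot 1-1=0$ and may be a nonzero series in $u$ with coefficients free of the $c_i$.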
The above argument shows that the descendents on the right side of \eqref{mqq23}
all correspond to partitions of size {\em less} than $|\alpha|$ except for the {\em leading term} obtained from the maximal set partition $$\{1\} \cup \{2\} \cup \ldots \cup \{\ell\} = \{1,2,\ldots, \ell\}$$ in $\ell$ parts. The leading term of the descendent correspondence, calculated in \cite{PPDC}, is a third basic property of $\widetilde{\mathsf{K}}$:
\begin{enumerate} \item[(iii)] $\ \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}
= (iu)^{\ell(\alpha)-|\alpha|}\, \tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell}) +\ldots .$ \end{enumerate}
In case $\alpha=1^\ell$ has all parts equal to 1, then $\alpha_S$ also has all parts equal to 1 for every $S\in P$. By property (ii), the $u$ coefficients of $\widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}$ are homogeneous of degree
$$3-\ell(\alpha_S) - |\widehat{\alpha}| - \ell(\widehat{\alpha}),$$ and hence vanish unless $$\alpha_S= \widehat{\alpha}=(1)\ .$$ Therefore, if $\alpha$ has all parts equal to $1$, the leading term is the entire formula. We obtain a fourth property of the matrix $\widetilde{\mathsf{K}}$:
\begin{enumerate} \item[(iv)] $\ \overline{\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})} = \tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})\, .$ \end{enumerate}
In the presence of odd cohomology, a natural sign must be included in formula \eqref{mqq23}. We may write set partitions $P$ of $\{1,\ldots, \ell\}$ indexing the sum on the right side of \eqref{mqq23} as
$$S_1\cup \ldots \cup S_{|P|} = \{1,\ldots, \ell\}.$$ The parts $S_i$ of $P$ are unordered, but we choose an ordering for each $P$. We then obtain a permutation of $\{1, \ldots, \ell\}$ by moving the elements to the ordered parts $S_i$ (and respecting the original order in each group). The permutation, in turn, determines a sign $\sigma(P)$ via the anti-commutation of the associated odd classes. We then write \begin{equation*} \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} = \sum_{P \text{ set partition of }\{1,\ldots,\ell\}}\ (-1)^{\sigma(P)} \prod_{S_i\in P}\ \sum_{\widehat{\alpha}}\tau_{\widehat{\alpha}}(\widetilde{\mathsf{K}}_{\alpha_{S_i},\widehat{\alpha}} \cdot\gamma_{S_i}) \ . \end{equation*} The descendent $\overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}$ is easily seen to have the same commutation rules with respect to odd cohomology as ${\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}$.
The geometric construction of $\widetilde{\mathsf{K}}$ in \cite{PPDC} expresses the coefficients explicitly in terms of the 1-legged capped descendent vertex for stable pairs and stable maps. These vertices can be computed (as a rational function in the stable pairs case and term by term in the genus parameter for stable maps). Hence, the coefficient $$\widetilde{\mathsf{K}}_{\alpha,\widehat{\alpha}}\in \mathbb{Q}[i,c_1,c_2,c_3]((u))$$ can, in principle, be calculated term by term in $u$. The calculations in practice are quite difficult, and
complete closed formulas are not known for all of the coefficients.
\subsection{Absolute case} To state the descendent correspondence proposed in \cite{PPDC} for all nonsingular projective 3-folds $X$, the basic degree $$d_\beta = \int_{\beta} c_1(X) \ \in \mathbb{Z}$$ associated to the class $\beta\in H_2(X,\mathbb{Z})$ is required.
\begin{Conjecture}[P.-Pixton, 2011] \label{ttt222} Let $X$ be a nonsingular projective 3-fold. For $\gamma_i \in H^{*}(X)$, we have \begin{multline*}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \Big)_\beta \\ =
(-iu)^{d_\beta}\ZZ'_{\mathsf{GW}}\Big(X;u\ \Big| \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}\ \Big)_\beta \end{multline*} under the variable change $-q=e^{iu}$. \end{Conjecture}
Since the stable pairs side of the correspondence
$$\ZZ_{\mathsf{P}}\Big(X;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \Big)_\beta\, \in \mathbb{Q}((q))$$
is defined as a series in $q$, the change of variable $-q=e^{iu}$ is {\em not} \`a priori well-defined. However, the stable pairs descendent series is predicted by Conjecture \ref{111} to be a rational function in $q$. The change of variable $-q=e^{iu}$ is well-defined for a rational function in $q$ by substitution. The well-posedness of Conjecture \ref{ttt222} therefore depends upon Conjecture \ref{111}.
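A simple illustration of the substitution is provided by the degree 1 local curve series $\frac{q}{(1+q)^2}$ of Section \ref{fex}, where $X$ is Calabi-Yau and $d_\beta=0$. Under $-q=e^{iu}$, $$\frac{q}{(1+q)^2} = \frac{-e^{iu}}{(1-e^{iu})^2} = \frac{1}{4\sin^2(u/2)} = u^{-2}+\frac{1}{12}+\frac{u^2}{240}+\ldots\, ,$$ recovering the degree 1 multiple cover series of \cite{FP} on the Gromov-Witten side.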
\subsection{Geometry of descendents}\label{ggogg} Let $X$ be a nonsingular projective 3-fold, and let
$D\subset X$ be a nonsingular divisor. The Gromov-Witten descendent insertion $\tau_1(D)$ has a simple geometric leading term. Let $$[f:(C,p) \rightarrow X] \in \overline{M}_{g,1}(X,\beta)$$ be a stable map. Let $$\text{ev}_1: \overline{M}_{g,1}(X,\beta) \rightarrow X$$ be the evaluation map at the marking. The cycle $$\text{ev}^{-1}_1(D) \subset \overline{M}_{g,1}(X,\beta)$$ corresponds to stable maps with $f(p)\in D$. On the locus $\text{ev}^{-1}_1(D)$, there is a differential \begin{equation}\label{sgg47} df: T_{C,p} \rightarrow N_{X/D,f(p)} \end{equation} from the tangent space of $C$ at $p$ to the normal space of $D\subset X$ at $f(p)\in D$. The differential \eqref{sgg47} on $\text{ev}^{-1}_1(D)$ vanishes on the locus where $f(C)$ is {\em tangent} to $D$ at $p$. In other words, $$\tau_{1}(D)+\tau_0(D^2) = \text{ev}_1^{-1}(D) \left( -c_1(T_{C,p})+ \text{ev}_1^*(N_{X/D})\right)$$ has the tangency cycle as a leading term. There are correction terms from the loci where $p$ lies on a component of $C$ contracted by $f$ to a point of $D$.
A parallel relationship can be pursued for $\tau_k(D)$ for higher $k$ in terms of the locus of stable maps with higher tangency along $D$ at $f(p)$. A full correction calculus in case $X$ has dimension 1 (instead of 3) was found in \cite{OPGWH}. The method has also been successfully applied to calculate the characteristic numbers of curves in $\mathsf{P}^2$ for genus at most 2 in \cite{KGP}.{\footnote{In higher genus, the correction calculus in $\mathsf{P}^2$ was too complicated to easily control.}}
By the Gromov-Witten/Pairs correspondence of Conjecture \ref{ttt222}, the stable pairs descendent $\tau_k(D)$ has leading term on the Gromov-Witten side $$\overline{\tau_k(D)} = (iu)^{-k} \tau_{k}(D) + \ldots \, .$$ Hence, the descendents $\tau_k(D)$ on the stable pairs side should be viewed as essentially connected to the tangency loci associated to the divisor $D\subset X$.
\subsection{Equivariant case}\label{eee999} If $X$ is a nonsingular quasi-projective toric 3-fold, all terms of the descendent correspondence have ${\mathbf{T}}$-equivariant interpretations. We take the equivariant K\"unneth decomposition in \eqref{j77833}, and the equivariant Chern classes $c_i(T_X)$ with respect to the canonical ${\mathbf{T}}$-action on $T_X$ in \eqref{h3492}. The toric case is proven in \cite{PPDC}.
\begin{Theorem}[P.-Pixton, 2011] Let \label{tt66} $X$ be a nonsingular quasi-projective toric 3-fold. For $\gamma_i \in H^{*}_{\mathbf{T}}(X)$, we have \begin{multline*}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \Big)^{\mathbf{T}}_\beta \\ =
(-iu)^{d_\beta}\ZZ'_{\mathsf{GW}}\Big(X;u\ \Big| \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}\ \Big)^{\mathbf{T}}_\beta \end{multline*} under the variable change $-q=e^{iu}$. \end{Theorem}
Since the stable pairs side of the correspondence
$$\ZZ_{\mathsf{P}}\Big(X;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \Big)^{\mathbf{T}}_\beta\, \in \mathbb{Q}(s_1,s_2,s_3)((q))$$
is a rational function in $q$ by Theorem \ref{pp12}, the change of variable $-q=e^{iu}$ is well-defined by substitution.
When $X$ is a nonsingular projective toric $3$-fold, Theorem \ref{tt66} implies Conjecture \ref{ttt222} for $X$ by taking the non-equivariant limit. However, Theorem \ref{tt66} is much stronger in the toric case than Conjecture \ref{ttt222} since the descendent insertions may exceed the virtual dimension in equivariant cohomology.
In case $\alpha=(1)^\ell$ has all parts equal to 1, Theorem \ref{tt66} specializes by property (iv) of Section \ref{corrmat} to the simpler statement \begin{multline}\label{pp889}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X;q\ \Big|\, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})} \Big)^{\mathbf{T}}_\beta \\ =
(-iu)^{d_\beta}\ZZ'_{\mathsf{GW}}\Big(X;u\ \Big| \, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})}\, \Big)^{\mathbf{T}}_\beta \end{multline} which was first proven in the context of ideal sheaves in \cite{moop}. Viewing both sides of \eqref{pp889} as series in $u$, we can complex conjugate the coefficients. Imaginary numbers only occur in $$-q = e^{iu} \ \ \ \ \text{and} \ \ \ \ (-iu)^{d_\beta}\, .$$ After complex conjugation, we find \begin{multline*}
(-q)^{d_\beta/2}\ZZ_{\mathsf{P}}\Big(X;\frac{1}{q}\ \Big| \, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})}\, \Big)^{\mathbf{T}}_\beta \\ =
(iu)^{d_\beta}\ZZ'_{\mathsf{GW}}\Big(X;u\ \Big| \, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})}\, \Big)^{\mathbf{T}}_\beta \end{multline*} and thus obtain the functional equation \begin{equation*}
\ZZ_{\mathsf{P}}\Big(X;\frac{1}{q}\, \Big|\, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})}\, \Big)^{\mathbf{T}}_\beta =
q^{-d_\beta}\ZZ_{\mathsf{P}}\Big(X;q\, \Big| \, {\tau_{0}(\gamma_1)\cdots \tau_{0}(\gamma_{\ell})}\, \Big)^{\mathbf{T}}_\beta \end{equation*} as predicted by Conjecture \ref{444}.
\subsection{Relative case} \subsubsection{Relative Gromov-Witten theory} Let $X$ be a nonsingular projective 3-fold with a nonsingular divisor $$D\subset X\, .$$ The relative theory of stable pairs was discussed in Section \ref{relth}. A parallel relative Gromov-Witten theory of stable maps with specified tangency along the divisor $D$ can also be defined.
In Gromov-Witten theory, relative conditions are represented by a partition $\mu$ of the integer $ \int_\beta [D], $ each part $\mu_i$ of which is marked by a cohomology class $\delta_i\in H^*(D,\mathbb{Z})$, \begin{equation} \label{mm33} \mu=( (\mu_1,\delta_1), \ldots, (\mu_\ell,\delta_\ell))\, . \end{equation} The numbers $\mu_i$ record the multiplicities of intersection with $D$ while the cohomology labels $\delta_i$ record where the tangency occurs. More precisely, let $\overline{M}_{g,r}'(X/D,\beta)_\mu$ be the moduli space of stable relative maps with tangency conditions $\mu$ along $D$. To impose the full boundary condition, we pull-back the classes $\delta_i$ via the evaluation maps \begin{equation}\label{gtth341} \overline{M}_{g,r}'(X/D,\beta)_\mu \to D \end{equation} at the points of tangency.
Also, the tangency points are considered to be unordered.{\footnote{The evaluation maps are well-defined only after ordering the points. We define the theory first with ordered tangency points. The unordered theory is then defined by dividing by the automorphisms of the cohomology weighted partition $\mu$.}}
Relative Gromov-Witten theory was defined before the study of stable pairs. For the foundations, including the definition of the moduli space of stable relative maps and the construction of the virtual class $$[\overline{M}_{g,r}'(X/D,\beta)_\mu] \in H_*(\overline{M}_{g,r}'(X/D,\beta)_\mu)\, ,$$ we refer the reader to \cite{LR,L}.
\subsubsection{Diagonal classes}\label{diagclas} Definition \eqref{mqq23} of the Gromov-Witten/Pairs correspondence in the absolute case involves the diagonal $$\iota_\Delta:\Delta\rightarrow X^s$$ via \eqref{j77833}. For the correspondence in the relative case, the diagonal has a more subtle definition.
For the absolute geometry $X$, the product $X^s$ naturally parameterizes $s$ ordered (possibly coincident) points on $X$. For the relative geometry $X/D$, the parallel object is the moduli space $(X/D)^s$ of $s$ ordered (possibly coincident) points $$(p_1,\ldots, p_s) \in X/D\, .$$ The points parameterized by $(X/D)^s$ are not allowed to lie on the relative divisor $D$. When the points approach $D$, the target $X$ degenerates. The resulting moduli space $(X/D)^s$ is a nonsingular variety.
Let $$\Delta_{\mathsf{rel}} \subset (X/D)^s$$ be the small diagonal where all the points $p_i$ are coincident. As a variety, $\Delta_{\mathsf{rel}}$ is isomorphic to $X$.
The space $(X/D)^s$ is a special case of well-known constructions in relative geometry. For example, $(X/D)^2$ consists of 6 strata:
\begin{picture}(150,150)(-120,-5) \thicklines
\put(10,10){\line(1,0){100}} \put(10,110){\line(1,0){100}}
\put(10,10){\line(0,1){100}} \put(110,10){\line(0,1){100}}
\put(25,80){$1\bullet$} \put(75,60){$2\bullet$} \put(55,20){$X$} \put(115,20){$D$} \end{picture}
\begin{picture}(150,150)(0,-5) \thicklines
\put(10,10){\line(1,0){100}} \put(10,110){\line(1,0){100}}
\put(10,10){\line(0,1){100}} \put(110,10){\line(0,1){100}}
\put(110,110){\line(2,1){40}} \put(110,10){\line(2,1){40}} \put(150,30){\line(0,1){100}} \put(155,40){$D$}
\put(120,80){$1\bullet$} \put(75,60){$2\bullet$} \put(55,20){$X$}
\put(210,10){\line(1,0){100}} \put(210,110){\line(1,0){100}}
\put(210,10){\line(0,1){100}} \put(310,10){\line(0,1){100}}
\put(310,110){\line(2,1){40}} \put(310,10){\line(2,1){40}} \put(350,30){\line(0,1){100}} \put(355,40){$D$}
\put(225,80){$1\bullet$} \put(325,60){$2\bullet$} \put(255,20){$X$}
\end{picture}
\begin{picture}(150,150)(-100,-5) \thicklines
\put(10,10){\line(1,0){100}} \put(10,110){\line(1,0){100}}
\put(10,10){\line(0,1){100}} \put(110,10){\line(0,1){100}}
\put(110,110){\line(2,1){40}} \put(110,10){\line(2,1){40}} \put(150,30){\line(0,1){100}} \put(155,40){$D$}
\put(120,80){$1\bullet$} \put(120,50){$2\bullet$} \put(55,20){$X$}
\end{picture}
\begin{picture}(150,150)(-80,-5) \thicklines
\put(10,10){\line(1,0){100}} \put(10,110){\line(1,0){100}}
\put(10,10){\line(0,1){100}} \put(110,10){\line(0,1){100}}
\put(110,110){\line(2,1){40}} \put(110,10){\line(2,1){40}} \put(150,30){\line(0,1){100}}
\put(150,30){\line(2,1){40}} \put(150,130){\line(2,1){40}} \put(190,50){\line(0,1){100}} \put(195,60){$D$}
\put(160,80){$1\bullet$} \put(120,50){$2\bullet$} \put(55,20){$X$}
\thicklines
\end{picture}
\begin{picture}(150,150)(-80,-5) \thicklines
\put(10,10){\line(1,0){100}} \put(10,110){\line(1,0){100}}
\put(10,10){\line(0,1){100}} \put(110,10){\line(0,1){100}}
\put(110,110){\line(2,1){40}} \put(110,10){\line(2,1){40}} \put(150,30){\line(0,1){100}}
\put(150,30){\line(2,1){40}} \put(150,130){\line(2,1){40}} \put(190,50){\line(0,1){100}} \put(195,60){$D$}
\put(160,80){$2\bullet$} \put(120,50){$1\bullet$} \put(55,20){$X$}
\thicklines
\end{picture}
\noindent As a variety, $(X/D)^2$ is the blow-up of $X^2$ along $D^2$. And, $\Delta_{\mathsf{rel}} \subset (X/D)^2$ is the strict transform of the standard diagonal.
Select a subset $S$ of cardinality $s$ from the $r$ markings of the moduli space of maps. Just as $\overline{M}_{g,r}'(X,\beta)$ admits a canonical evaluation to $X^s$ via the selected markings, the moduli space $\overline{M}_{g,r}'(X/D,\beta)_\mu$ admits a canonical evaluation
$$\text{ev}_S: \overline{M}_{g,r}'(X/D,\beta)_\mu \rightarrow (X/D)^s , $$ well-defined by the definition of a relative stable map (the markings never map to the relative divisor). The class $$\text{ev}_S^*(\Delta_{\mathsf{rel}}) \in H^*(\overline{M}_{g,r}'(X/D,\beta)_\mu)$$ plays a crucial role in the relative descendent correspondence.
By forgetting the relative structure, we obtain a projection $$\pi:(X/D)^s \rightarrow X^s\ .$$ The product contains the standard diagonal $\Delta\subset X^s$. However, $$\pi^*(\Delta) \neq \Delta_{\mathsf{rel}}\ .$$ The former has more components in the relative boundary if $D\neq \emptyset$.
\subsubsection{Relative descendent correspondence} \label{pwwf} Let $\widehat{\alpha}$ be a partition of length $\widehat{\ell}$. Let $\Delta_{\mathsf{rel}}$ be the cohomology class of the small diagonal in $(X/D)^{\widehat{\ell}}$. For a cohomology class $\gamma$ of $X$, let $$\gamma\cdot \Delta_{\mathsf{rel}} \in H^*\big((X/D)^{\widehat{\ell}}\, \big)\, ,$$ where $\Delta_{\mathsf{rel}}$ is the small diagonal of Section \ref{diagclas}. Define the relative descendent insertion $\tau_{\widehat{\alpha}}(\gamma)$ by \begin{equation}\label{j9994} \tau_{\widehat{\alpha}}(\gamma)= \psi_1^{\widehat{\alpha}_1-1} \cdots \psi_{\hat{\ell}}^{\widehat{\alpha}_{\hat{\ell}}-1} \cdot \text{ev}^*_{1,\ldots,\hat{\ell}} ( \gamma\cdot \Delta_{\mathsf{rel}}) \ . \end{equation} In case $D=\emptyset$, definition \eqref{j9994} specializes to \eqref{j77833}.
Let $\Omega_X[D]$ denote the locally free sheaf of differentials with logarithmic poles along $D$. Let $$T_{X}[-D] = \Omega_{X}[D]^{\ \vee}$$ denote the dual sheaf of tangent fields with logarithmic zeros.
For the relative geometry $X/D$, the coefficients of the correspondence matrix $\widetilde{\mathsf{K}}$ act on the cohomology of $X$ via the substitution $$c_i= c_i(T_{X}[-D])$$ instead of the substitution $c_i=c_i(T_X)$ used in the absolute case. Then, we define \begin{equation} \label{gtte4} \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} = \sum_{P \text{ set partition of }\{1,\ldots,\ell\}}\ \prod_{S\in P}\ \sum_{\widehat{\alpha}}\tau_{\widehat{\alpha}}(\widetilde{\mathsf{K}}_{\alpha_S,\widehat{\alpha}}\cdot\gamma_S) \ \end{equation} as before via \eqref{j9994} instead of \eqref{j77833}. Definition \eqref{gtte4} is for even classes $\gamma_i$. In the presence of odd $\gamma_i$, a sign has to be included exactly as in the absolute case.
\begin{Conjecture} \label{ttt444} For $\gamma_i \in H^{*}(X)$, we have \begin{multline*}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X/D;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots
\tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \ \Big| \ \mu \Big)_\beta \\ =
(-iu)^{d_\beta+\ell(\mu)-|\mu|}\ZZ'_{\mathsf{GW}}\Big(X/D;u\ \Big| \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}
\ \Big| \ \mu\Big)_\beta \end{multline*} under the variable change $-q=e^{iu}$. \end{Conjecture}
The change of variables is well-defined by the rationality of Conjecture \ref{222}. A case in which Conjecture \ref{ttt444} is proven is when $X$ is a nonsingular projective toric 3-fold and $D\subset X$ is a toric divisor. The rationality of the stable pairs series is given by Theorem \ref{ppp12}. The following result can be obtained by the methods of \cite{PPQ}.
\begin{Theorem} For $X/D$ a nonsingular projective relative toric 3-fold
and $\gamma_i \in H^{*}(X)$, we have \begin{multline*}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X/D;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots
\tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \ \Big| \ \mu \Big)_\beta \\ =
(-iu)^{d_\beta+\ell(\mu)-|\mu|}\ZZ'_{\mathsf{GW}}\Big(X/D;u\ \Big| \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}
\ \Big| \ \mu\Big)_\beta \end{multline*} under the variable change $-q=e^{iu}$. \end{Theorem}
Conjecture \ref{ttt444} can be lifted in a canonical way to the equivariant relative case (as in the rationality of Conjecture \ref{333}). Some equivariant relative results are proven in \cite{PPQ}.
\subsection{Complete intersections} Let $X$ be a Fano or Calabi-Yau complete intersection of ample divisors in a product of projective spaces, $$ X \subset {\mathsf{P}}^{n_1} \times \cdots \times {\mathsf{P}}^{n_m}\ .$$ A central result of \cite{PPQ} is the proof of the descendent correspondence for even classes.
\begin{Theorem} [P.-Pixton, 2012] \label{qqq111} Let $X$ be a nonsingular Fano or Calabi-Yau complete intersection 3-fold in a product of projective spaces. For even classes $\gamma_i \in H^{2*}(X)$, we have \begin{multline*}
(-q)^{-d_\beta/2}\ZZ_{\mathsf{P}}\Big(X;q\ \Big| {\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})} \Big)_\beta \\ =
(-iu)^{d_\beta}\ZZ'_{\mathsf{GW}}\Big(X;u\ \Big| \ \overline{\tau_{\alpha_1-1}(\gamma_1)\cdots \tau_{\alpha_{\ell}-1}(\gamma_{\ell})}\ \Big)_\beta \end{multline*} under the variable change $-q=e^{iu}$. \end{Theorem}
Theorem \ref{qqq111} relies on the rationality of the stable pairs series of Theorem \ref{qqq111f}. For $\gamma_i \in H^{2*}(X)$ even classes of {\em positive} degree, we obtain from Theorem \ref{qqq111} (under the same complete intersection hypothesis for $X$) the following result where only the leading term of the correspondence contributes: \begin{multline*} (-q)^{-d_\beta/2}\ \mathsf{Z}_{\mathsf{P}}\left(X;q \
\Bigg| \ \prod_{i=1}^r {\tau}_0(\gamma_{i})
\prod_{j=1}^s {\tau}_{k_j}(\mathsf{p}) \right)_{\beta}=\\ (-iu)^{d_\beta} (iu)^{-\sum k_j}\
\mathsf{Z}'_{\mathsf{GW}}\left(X;u \ \Bigg| \ \prod_{i=1}^r \tau_0(\gamma_{i}) \prod_{j=1}^s {\tau}_{k_j}(\mathsf{p}) \right)_{\beta} \ \end{multline*} under the variable change $-q=e^{iu}$.
If we specialize Theorem \ref{qqq111} further to the case where there are no descendent insertions, we obtain $$ \ZZ_{\mathsf{P}}\Big(X;q\Big)_\beta = \ZZ'_{\mathsf{GW}}\Big(X;u\Big)_\beta $$ under the variable change $-q=e^{iu}$ for Calabi-Yau complete intersections in a product of projective spaces. In particular, the Gromov-Witten/Pairs correspondence holds for the famous quintic Calabi-Yau 3-fold $$X_5 \subset \mathsf{P}^4\, .$$
\subsection{$K3$ fibrations}
Let $Y$ be a nonsingular projective toric 3-fold for which the anticanonical class $K_Y^*$ is base point free and the generic anticanonical divisor is a nonsingular projective $K3$ surface $S$. Let \begin{equation}\label{xhxhxh} X \subset Y \times \mathsf{P}^1 \end{equation} be a nonsingular hypersurface in the class $K^*_Y \otimes K^*_{\mathsf{P}^1}$. Using the degeneration $$ X \leadsto Y\, \cup\, S\times \mathsf{P}^1\, \cup\, Y$$ obtained by factoring a divisor of $K^*_Y \otimes K^*_{\mathsf{P}^1}$, the results of \cite{PPQ} yield the Gromov-Witten/Pairs correspondence for the Calabi-Yau 3-fold $X$.{\footnote{The strategy here is simpler than presented in Appendix B of \cite{rp13} for a particular toric 4-fold $Y$.}}
The hypersurface $X$ defined by \eqref{xhxhxh} is a $K3$-fibered Calabi-Yau 3-fold. A very natural question to ask is whether the Gromov-Witten/Pairs correspondence can be proven for all $K3$-fibered 3-folds. While the general case is open, results for the correspondence in fiber classes can be found in \cite{rp13}.{\footnote{Parallel questions can be pursued for other surfaces. For results for surfaces of general type (involving the stable pairs theory of descendents), see \cite{KoolT}.}}
\section{Virasoro constraints} \label{333r}
\subsection{Overview} Descendent partition functions in Gromov-Witten theory are conjectured to satisfy Virasoro constraints \cite{EHX} for every target variety $X$. Via the Gromov-Witten/Pairs descendent correspondence, we expect parallel constraints for the descendent theory of stable pairs. An ideal path to finding the constraints for stable pairs would be to start with the explicit Virasoro constraints in Gromov-Witten theory and then apply the correspondence. However, our knowledge of the correspondence matrix is not yet sufficient for such an application.
Another method is to look experimentally for relations which are of the expected shape. In a search conducted almost 10 years ago with A. Oblomkov and A. Okounkov, we found a set of such relations for the theory of ideal sheaves \cite{oop} for every nonsingular projective 3-fold $X$. As an example, the equations for $\mathsf{P}^3$ are presented here for stable pairs.{\footnote{Since \cite{oop} is written for ideal sheaves, a DT/PT correspondence for descendents is needed to move the relations to the theory of stable pairs. Such a correspondence is also studied in \cite{oop}. I am very grateful to A. Oblomkov for his help with the formulas here.}}
\subsection{First equations}\label{firsteq} Let $X$ be a nonsingular projective 3-fold. The descendent insertions $$\tau_0(1)\, , \ \ \ \tau_0(D)\, \ \text{for $D\in H^2(X)$}, \ \ \ \tau_1(1)$$ all satisfy simple equations (parallel to the string, divisor, and dilaton equations in Gromov-Witten theory): \begin{enumerate} \item[(i)]
$\ZZ_{P}\Big(X;q\ \Big|\, \tau_0(1)\cdot \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta = 0$, \item[(ii)]
$\ZZ_{P}\Big(X;q\ \Big|\, \tau_0(D)\cdot \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta =
\left( \int_\beta D\right) \, \ZZ_{P}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta \, $,
\item[(iii)] $\ZZ_{P}\Big(X;q\ \Big|\, \tau_1(1)\cdot \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta =
\left(q\frac{d}{dq} - \frac{d_\beta}{2} \right) \, \ZZ_{P}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta $ . \end{enumerate} All three are obtained directly from the definition of the descendent action given
in Section \ref{actact}. To prove (iii), the Hirzebruch-Riemann-Roch equation $$\text{ch}_3(F)= n - \frac{d_\beta}{2}$$ is used for a stable pair $$[F,s] \in P_n(X,\beta)\, , \ \ \ d_\beta= \int_\beta c_1(X)\, .$$
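In more detail (a brief sketch of the reasoning, using only the facts just stated): the insertion $\tau_1(1)$ acts on $P_n(X,\beta)$ by the constant $\text{ch}_3(F)=n-\frac{d_\beta}{2}$, so inserting it weights the $q^n$ coefficient of the partition function by $n-\frac{d_\beta}{2}$, \begin{equation*} \sum_{n\in \mathbb{Z}} q^n \left(n-\frac{d_\beta}{2}\right) \Big\langle \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big\rangle_{n,\beta} = \left(q\frac{d}{dq}-\frac{d_\beta}{2}\right) \ZZ_{P}\Big(X;q\ \Big| \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta\, , \end{equation*} which is exactly (iii).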
The compatibility of (i) and (ii) with the functional equation of Conjecture \ref{444} is trivial. While not as obvious, the differential operator $$q\frac{d}{dq} - \frac{d_\beta}{2}$$ is also beautifully consistent with Conjecture \ref{444}. We can easily prove using (iii) that Conjecture \ref{444} holds for
$$\ZZ_{P}\Big(X;q\ \Big|\, \tau_1(1)\cdot \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta$$ if and only if Conjecture \ref{444} holds for
$$\ZZ_{P}\Big(X;q\ \Big|\, \prod_{i=1}^r \tau_{k_i}(\gamma_i) \Big)_\beta\, .$$ For example, equation (iii) yields
$$\ZZ_{\mathsf{P}}\big({\mathsf{P}}^3;q\ |\, \tau_1(1) \tau_5(\mathsf{1}) \big )_{\mathsf{L}} = \frac{q+4q^2+17q^3-62q^4+17q^5+4q^6+q^7}{9(1+q)^4}\, $$ when applied to \eqref{s555}.
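As a quick sanity check (a sketch added here, not part of the original argument; it assumes $d_\beta=4$ for the line class $\mathsf{L}$, since $\int_{\mathsf{L}}c_1(\mathsf{P}^3)=4$), the symmetry $\ZZ_{\mathsf{P}}(1/q)=q^{-4}\,\ZZ_{\mathsf{P}}(q)$ of Conjecture \ref{444} for this series can be verified symbolically:
\begin{verbatim}
# Verify Z(1/q) = q^(-4) * Z(q) for the series above (SymPy).
from sympy import symbols, simplify

q = symbols('q')
Z = (q + 4*q**2 + 17*q**3 - 62*q**4
     + 17*q**5 + 4*q**6 + q**7) / (9*(1 + q)**4)

assert simplify(Z.subs(q, 1/q) - q**(-4)*Z) == 0
\end{verbatim}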
\subsection{Operators and constraints} \label{sec:virasoro-constraints} A basis of the cohomology $H^*(\mathsf{P}^3)$ is given by $$\mathsf{1}\, ,\ \mathsf{H}\, ,\ \mathsf{L}=\mathsf{H}^2\, ,\ \mathsf{p}=\mathsf{H}^3$$ where $\mathsf{H}$ is the hyperplane class. The divisor and dilaton equations here are \begin{eqnarray*}
\ZZ_{P}\Big(\mathsf{P}^3;q\ \Big|\, \tau_0(\mathsf{H})\cdot \mathsf{D} \Big)_{d\mathsf{L}} &=&
d \ZZ_{P}\Big(\mathsf{P}^3;q\ \Big|\, \mathsf{D}\Big)_{d\mathsf{L}} \, , \\
\ZZ_{P}\Big(\mathsf{P}^3;q\ \Big|\, \tau_1(1)\cdot \mathsf{D}\Big)_{d\mathsf{L}} &=&
\left(q\frac{d}{dq} - 2{d} \right) \, \ZZ_{P}\Big(\mathsf{P}^3;q\ \Big|\, \mathsf{D}\Big)_{d\mathsf{L}} \, , \end{eqnarray*} where $\mathsf{D}= \prod_{i=1}^r \tau_{k_i}(\gamma_i)$ is an arbitrary descendent insertion.
Before presenting the formulas, we introduce two conventions which simplify the notation. The first concerns descendents with negative subscripts. We define the descendent action in two negative cases: \begin{equation}\label{jww9} \tau_{-2}(\mathsf{H}^j)=-\delta_{j,3}\,,\quad \tau_{-1}(\gamma)=0\, . \end{equation} In particular, these all vanish except for $\tau_{-2}(\mathsf{p})= -1$. Convention \eqref{jww9} is consistent with Definition \ref{dact} via the replacement $$\text{ch}_{2+i}(\mathbb{F}) \mapsto \text{ch}_{2+i}(\mathbb{I}[1]^\bullet)\, \, ,$$ where $\mathbb{I}^\bullet$ is the universal stable pair on $X \times P_n(X,\beta)$.
For the Virasoro constraints, the formulas are more naturally stated in terms of the Chern character subscripts (instead of including the shift by 2 in Definition \ref{dact}). As a second convention, we define the insertions $\mathsf{ch}_i(\gamma)$ by \begin{equation}\label{kqq2} \mathsf{ch}_i(\gamma)=\tau_{i-2}(\gamma) \end{equation} for all $i\geq 0$. In particular, $\mathsf{ch}_0(\mathsf{p})$ acts as $-1$ and $\mathsf{ch}_1(\mathsf{H}^j)$ acts as 0.
Let $\mathbb{D}^+$ be a $\mathbb{Q}$-polynomial ring with generators
$$\Big\{\, \mathsf{ch}_i(\mathsf{H}^j)\, \Big| \ i\ge 0\, ,\ \ j=0,1,2,3\, \Big\}\, .$$ Via equation \eqref{kqq2}, we view $\mathbb{D}^+$ as an extension $$\mathbb{D} \subset \mathbb{D}^+\, $$ of the algebra of descendents defined in Section \ref{actactt}. We define $$\mathsf{ch}_a\mathsf{ch}_b(\mathsf{H}^j) \in \mathbb{D}^+$$ in terms of the generators by
$$\mathsf{ch}_a\mathsf{ch}_b(\mathsf{H}^j) = \sum_{r,s} \mathsf{ch}_a(\gamma^L_r) \mathsf{ch}_b(\gamma^R_s)$$ where the sum is indexed by the K\"unneth decomposition $$\mathsf{H}^j\cdot \Delta = \sum_{r,s} \gamma^L_r \otimes \gamma^R_s \in H^*(\mathsf{P}^3 \times \mathsf{P}^3)\, $$ and $\Delta \subset \mathsf{P}^3 \times \mathsf{P}^3$ is the diagonal. Both $\mathsf{ch}_i(\mathsf{H}^j)$ and $\mathsf{ch}_a\mathsf{ch}_b(\mathsf{H}^j)$ define operators on $\mathbb{D}^+$ by multiplication.
To write the Virasoro relations, we will define
derivations $$\mathrm{R}_k: \mathbb{D}^+ \rightarrow \mathbb{D}^+$$ for $k \geq -1$ by the following
action on the generators of $\mathbb{D}^+$, $$\mathrm{R}_k\left(\mathsf{ch}_i(\mathsf{H}^j)\right)=
\left(\, \prod_{n=0}^{k} (i+j-3+n)\, \right)\, \mathsf{ch}_{k+i}(\mathsf{H}^j)\, .$$ In case $k=-1$, the product on the right is empty and $$\mathrm{R}_{-1}\left(\mathsf{ch}_i(\mathsf{H}^j)\right)=
\mathsf{ch}_{i-1}(\mathsf{H}^j)\, .$$
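For example (a direct application of the definition, recorded here for later use), taking $k=1$ and $\mathsf{p}=\mathsf{H}^3$, $$\mathrm{R}_1\left(\mathsf{ch}_3(\mathsf{p})\right)= (3+3-3)(3+3-2)\, \mathsf{ch}_{4}(\mathsf{p}) = 12\, \mathsf{ch}_4(\mathsf{p})\, .$$ The factor 12 reappears in the $k=1$ prediction below.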
\begin{definition} Let $\mathcal{L}_k:\mathbb{D}^+\rightarrow \mathbb{D}^+$ for $k\geq -1$ be the operator \begin{eqnarray*}
\mathcal{L}_k&=&-2\sum_{a+b=k+2}(-1)^{d^L d^R}(a+d^L-3)!(b+d^R-3)!\, \mathsf{ch}_a\mathsf{ch}_b(\mathsf{H})\\ & & + \sum_{a+b=k}a!b!\,\mathsf{ch}_a\mathsf{ch}_b(\mathsf{p})\\& & +\, \mathrm{R}_k+(k+1)!\, \mathrm{R}_{-1}\mathsf{ch}_{k+1}(\mathsf{p})\, . \end{eqnarray*} \end{definition}
The first term in the formula for $\mathcal{L}_k$ requires explanation. By definition, \begin{equation}\label{ppqq22} \mathsf{ch}_a\mathsf{ch}_b(\mathsf{H}) = \mathsf{ch}_a(\mathsf{p}) \mathsf{ch}_b(\mathsf{H}) +\mathsf{ch}_a(\mathsf{L}) \mathsf{ch}_b(\mathsf{L}) + \mathsf{ch}_a(\mathsf{H}) \mathsf{ch}_b(\mathsf{p}) \end{equation} via the three terms of the K\"unneth decomposition of $\mathsf{H}\cdot\Delta$. The notation $$(-1)^{d^L d^R}(a+d^L-3)!(b+d^R-3)!\, \mathsf{ch}_a\mathsf{ch}_b(\mathsf{H})\, $$ is shorthand for the sum \begin{eqnarray*} & & (-1)^{3\cdot 1}(a+3-3)!(b+1-3)!\, \mathsf{ch}_a(\mathsf{p}) \mathsf{ch}_b(\mathsf{H})\\ &+& (-1)^{2\cdot 2}(a+2-3)!(b+2-3)!\, \mathsf{ch}_a(\mathsf{L}) \mathsf{ch}_b(\mathsf{L})\\ &+& (-1)^{1\cdot 3} (a+1-3)!(b+3-3)!\, \mathsf{ch}_a(\mathsf{H}) \mathsf{ch}_b(\mathsf{p})\, . \end{eqnarray*} The three summands of \eqref{ppqq22} are each weighted by the factor $$(-1)^{d^L d^R}(a+d^L-3)!(b+d^R-3)!$$ where $d^L$ is the (complex) degree of $\gamma^L$ and $d^R$ is the (complex) degree of $\gamma^R$ with respect to the K\"unneth summand $\gamma^L\otimes \gamma^R$.
In the second term of the formula, $a!b!\, \mathsf{ch}_a \mathsf{ch}_b(\mathsf{p})$ can be expanded as $$a!b!\, \mathsf{ch}_a\mathsf{ch}_b(\mathsf{p}) =a!b!\, \mathsf{ch}_a(\mathsf{p}) \mathsf{ch}_b(\mathsf{p})\, .$$ The summations over $a$ and $b$ in the first two terms in the formula for $\mathcal{L}_k$ require $a\geq0$ and $b\geq 0$. All factorials with negative arguments vanish.
For example, the formula for the first operator $\mathcal{L}_{-1}$ is \begin{eqnarray*} \mathcal{L}_{-1}&=& \mathsf{R}_{-1} + 0! \, \mathsf{R}_{-1} \mathsf{ch}_0(\mathsf{p}) \, .
\end{eqnarray*} For $\mathcal{L}_{0}$, we have \begin{eqnarray*} \mathcal{L}_{0}&=& -2\cdot(-1)^{3\cdot 1}(0+3-3)!(2+1-3)!\, \mathsf{ch}_0(\mathsf{p})\mathsf{ch}_2(\mathsf{H})\\
& & -2\cdot(-1)^{2\cdot 2}(1+2-3)!(1+2-3)!\, \mathsf{ch}_1(\mathsf{L})\mathsf{ch}_1(\mathsf{L})\\
& & -2\cdot(-1)^{1\cdot 3}(2+1-3)!(0+3-3)!\, \mathsf{ch}_2(\mathsf{H})\mathsf{ch}_0(\mathsf{p})\\ & & +\mathsf{ch}_0(\mathsf{p})\mathsf{ch}_0(\mathsf{p})\\ && +\mathsf{R}_{0} + \mathsf{R}_{-1} \mathsf{ch}_1(\mathsf{p})\, . \end{eqnarray*} After simplification, we obtain $$\mathcal{L}_0= 4 \mathsf{ch}_0(\mathsf{p})\mathsf{ch}_2(\mathsf{H})-2 \mathsf{ch}_1(\mathsf{L})\mathsf{ch}_1(\mathsf{L}) +\mathsf{ch}_0(\mathsf{p})\mathsf{ch}_0(\mathsf{p})
+\mathsf{R}_{0} + \mathsf{R}_{-1} \mathsf{ch}_1(\mathsf{p})\, . $$ The operators $\mathcal{L}_k$ on $\mathbb{D}^+$ are conjectured to be the analogs for stable pairs of the Virasoro constraints for the Gromov-Witten theory of $\mathsf{P}^3$.
\begin{Conjecture}[Oblomkov-Okounkov-P.] \label{fpp55} We have
$$\mathsf{Z}_{\mathsf{P}}(\mathsf{P}^3;q \ | \, \mathcal{L}_k \, \mathsf{D})_{d\mathsf{L}}=0$$ for all $k\geq -1$, for all $\mathsf{D}\in\mathbb{D}^+$, and
for all curve classes $d\mathsf{L}$. \end{Conjecture}
For example, for $k=-1$, Conjecture \ref{fpp55} states
$$\mathsf{Z}_{\mathsf{P}}(\mathsf{P}^3;q \ | \,\mathcal{L}_{-1} \mathsf{D})_{d\mathsf{L}}=0\, .$$ By the above calculation of $\mathcal{L}_{-1}$, \begin{eqnarray*}
\mathsf{Z}_{\mathsf{P}}(\mathsf{P}^3;q \ | \,\mathcal{L}_{-1} \mathsf{D})_{d\mathsf{L}} & =&
\mathsf{Z}_{\mathsf{P}}\Big(\mathsf{P}^3;q \ \Big| \,(\mathsf{R}_{-1} + 0! \, \mathsf{R}_{-1} \mathsf{ch}_0(\mathsf{p}))\, \mathsf{D} \Big)_{d\mathsf{L}}\\ & = & \mathsf{Z}_{\mathsf{P}}\Big(
\mathsf{P}^3;q \ \Big| \, (\mathsf{R}_{-1} - \mathsf{R}_{-1})\, \mathsf{D} \Big)_{d\mathsf{L}}\\ & = & 0\, , \end{eqnarray*} where we have also used the descendent action $\mathsf{ch}_0(\mathsf{p})=-1$. The claim
$$\mathsf{Z}_{\mathsf{P}}(\mathsf{P}^3;q \ | \, \mathcal{L}_{0} \mathsf{D})_{d\mathsf{L}}=0$$ is easily reduced to the divisor equation (ii) of Section \ref{firsteq} and is also true.
The first nontrivial assertion of Conjecture \ref{fpp55} occurs for $k=1$,
$$\mathsf{Z}_{\mathsf{P}}(\mathsf{P}^3;q \ | \,\mathcal{L}_{1} \mathsf{D})_{d\mathsf{L}} =
\mathsf{Z}_{\mathsf{P}}\Big(\mathsf{P}^3;q \ \Big| \,\big(-4\mathsf{ch}_3(\mathsf{H}) + \mathsf{R}_1 + 2\mathsf{ch}_2(\mathsf{p}) \mathsf{R}_{-1}\big)\, \mathsf{D} \Big)_{d\mathsf{L}}=0\, , $$ which is at the moment unproven. For example, let $\mathsf{D}= \mathsf{ch}_3(\mathsf{p})$ and $d=1$. We obtain a prediction for descendent series for $\mathsf{P}^3$, $$
-4\mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_3(\mathsf{H}) \mathsf{ch}_3(\mathsf{p}) )_{\mathsf{L}} +12\mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_4(\mathsf{p}))_{\mathsf{L}} +2\mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_2(\mathsf{p}) \mathsf{ch}_2(\mathsf{p}) )_{\mathsf{L}} =0\, ,$$ which can be checked using the evaluations \begin{align*} \mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_3(\mathsf{H}) \mathsf{ch}_3(\mathsf{p}) )_{\mathsf{L}} & =& \mathsf{Z}_{\mathsf{P}}(\tau_1(\mathsf{H}) \tau_1(\mathsf{p}) )_{\mathsf{L}} &=& \frac{3}{4}q - \frac{3}{2}q^2 +\frac{3}{4}q^3\, ,\\ \mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_4(\mathsf{p}))_{\mathsf{L}} & =& \mathsf{Z}_{\mathsf{P}}(\tau_2(\mathsf{p}))_{\mathsf{L}} &=&
\frac{1}{12}q -\frac{5}{6}q^2 +\frac{1}{12}q^3\, , \\ \mathsf{Z}_{\mathsf{P}}(\mathsf{ch}_2(\mathsf{p}) \mathsf{ch}_2(\mathsf{p}) )_{\mathsf{L}} & = & \mathsf{Z}_{\mathsf{P}}(\tau_0(\mathsf{p}) \tau_0(\mathsf{p}) )_{\mathsf{L}} &=&
q+ 2q^2+q^3 \, . \end{align*}
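The three evaluations can be combined by hand, but the cancellation is also quickly confirmed with exact rational arithmetic (a small sketch, not part of the original text):
\begin{verbatim}
# Check -4*Z1 + 12*Z2 + 2*Z3 = 0 coefficientwise in q, q^2, q^3.
from fractions import Fraction as F

Z1 = [F(3, 4), F(-3, 2), F(3, 4)]    # Z(ch_3(H) ch_3(p))
Z2 = [F(1, 12), F(-5, 6), F(1, 12)]  # Z(ch_4(p))
Z3 = [F(1), F(2), F(1)]              # Z(ch_2(p) ch_2(p))

assert all(-4*a + 12*b + 2*c == 0 for a, b, c in zip(Z1, Z2, Z3))
\end{verbatim}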
\subsection{The bracket} To find the Virasoro bracket, we introduce the operators \begin{eqnarray*}
L_k&=& -2\sum_{a+b=k+2}(-1)^{d^L d^R}(a+d^L-3)!(b+d^R-3)! \mathsf{ch}_a\mathsf{ch}_b(\mathsf{H})\\ & &+\sum_{a+b=k}a!b!\mathsf{ch}_a\mathsf{ch}_b(\mathsf{p})\\ & &+ \mathsf{R}_k\, . \end{eqnarray*}
We then obtain the Virasoro relations and the bracket with $\mathsf{ch}_k(\mathsf{p})$, $$[L_k,L_m]=(m-k)L_{k+m},\quad [L_n,k!\mathsf{ch}_k(\mathsf{p})]=k\cdot (k+n)!\mathsf{ch}_{n+k}(\mathsf{p}).$$ The operators $\mathcal{L}_k$ are expressed in terms of $L_k$ by: $$\mathcal{L}_k=L_k+(k+1)!L_{-1}\mathsf{ch}_{k+1}(\mathsf{p}).$$
\section{Virtual class in algebraic cobordism} \label{4448} \subsection{Overview} Let $X$ be a nonsingular projective 3-fold. From the work of J. Shen \cite{Shen}, the virtual fundamental class of the moduli space of stable pairs $$[P_n(X,\beta)]^{vir} \in A_{d_\beta}(P_n(X,\beta))$$ admits a canonical lift to the theory of algebraic cobordism{\footnote{We do not review the foundations of the theory of algebraic cobordism here. The reader can find discussions in \cite{LMo, LPa}. As for cohomology, we always take $\mathbb{Q}$-coefficients. Shen constructs a canonical lift to algebraic cobordism $[M]^{vir} \in \Omega_*(M)$ of the virtual class in Chow $[M]^{vir} \in A_*(M)$ obtained from a 2-term perfect obstruction theory on a quasi-projective scheme $M$. }} \begin{equation}\label{f99f} [P_n(X,\beta)]^{vir} \in \Omega_{d_\beta}(P_n(X,\beta))\, \end{equation} where $d_\beta=\int_\beta c_1(X)$. Shen's construction depends only upon the 2-term perfect obstruction theory of $P_n(X,\beta)$ and is closely related to earlier work of Ciocan-Fontanine and Kapranov \cite{CFK} and Lowrey-Sch\"urg \cite{LS}.
The lift \eqref{f99f} leads to several natural questions. The simplest is {\em how does the virtual class in algebraic cobordism vary with $n$?} Let $$\pi: P_n(X,\beta) \rightarrow \bullet$$ be the structure map to the point $\bullet$. Then, for fixed $\beta$, we define $$\mathsf{Z}^{\Omega}_{\mathsf{P}}(X;q)_\beta= \sum_{n\in \mathbb{Z}}q^n\, \pi_*[P_n(X,\beta)]^{vir} \ \in \Omega_{d_\beta}(\bullet) \otimes_{\mathbb{Q}} \mathbb{Q}((q))\, .$$ Is there an analogue for $\mathsf{Z}^{\Omega}_{\mathsf{P}}(X;q)_\beta$ of the rationality and functional equation for the descendent theory of the standard virtual class?
\subsection{Chern numbers} While the full data of the cobordism class \eqref{f99f} is difficult to analyze, the push-forward $$\pi_*[P_n(X,\beta)]^{vir} \in \Omega_{d_\beta}(\bullet)$$ is characterized by the virtual Chern numbers of $P_n(X,\beta)$.
Since $P_n(X,\beta)$ has a 2-term perfect obstruction theory, there is a virtual tangent complex $\mathsf{T}^{vir} \in D^b(P_n(X,\beta))$ with Chern classes $$c_i(\mathsf{T}^{vir})\in H^{2i}( P_n(X,\beta))\, .$$ For every partition of the virtual dimension $d_\beta$, $$\sigma=(s_1,\ldots, s_\ell)\, , \ \ \ \ \ d_\beta=\sum_{i=1}^\ell s_i\, ,$$ we define an associated Chern number $$c^\sigma_{n,\beta} = \int_{[P_n(X,\beta)]^{vir}} \prod_{i=1}^\ell c_{s_i}(\mathsf{T}^{vir})\ \in \mathbb{Z}$$ by integration against the standard virtual class $$[P_n(X,\beta)]^{vir} \in H_{2d_\beta}(P_n(X,\beta))\, .$$ The complete collection of Chern numbers
$$\big\{\, c^\sigma_{n,\beta} \, \big| \, \sigma \in \text{Partitions}(d_\beta)\, \big\}$$ uniquely determines the algebraic cobordism class $$\pi_*[P_n(X,\beta)]^{vir} \in \Omega_{d_\beta}(\bullet)\, .$$
\subsection{Rationality and the functional equation}
The rationality of the partition function $\mathsf{Z}^{\Omega}_{\mathsf{P}}(X;q)_\beta$ is equivalent to the rationality of {\em all} the functions $$\mathsf{Z}^\sigma_{\mathsf{P}}(X;q)_\beta = \sum_{n\in \mathbb{Z}} c^\sigma_{n,\beta} q^n$$ for $\sigma \in \text{Partitions}(d_\beta)$.
\begin{Theorem}[Shen 2014]\label{ll22} The Chern class $c_i(\mathsf{T}^{vir})\in H^{2i}( P_n(X,\beta))$ can be written as a $\mathbb{Q}$-linear combination of products of descendent classes
$$\left\{ \, \prod_{i=1}^r \tau_{k_i}(\gamma_i) \ \Big| \ \sum_{i=1}^rk_i \equiv 0 \, \text{\em mod 2}\, , \, \gamma_i\in H^*(X)\, \right\}$$ by a formula which is independent of $n$ and $\beta$. \end{Theorem}
Shen's proof is geometric and constructive. Following the notation of Section \ref{actact}, let $$\pi_P: X \times P_n(X,\beta) \rightarrow P_n(X,\beta)$$ be the projection and let $\mathbb{I}^\bullet\in D^b( X \times P_n(X,\beta) )$ be the universal stable pair. The class of the virtual tangent complex in $K^0(P_n(X,\beta))$ is \begin{eqnarray*} [-\mathsf{T}^{vir}] &=& [R\pi_{P*} R\mathcal{H}om(\mathbb{I}^\bullet,\mathbb{I}^\bullet)_0] \\ & = & [R\pi_{P*} (\mathbb{I}^\bullet\otimes^L (\mathbb{I}^\bullet)^\vee)] - [R\pi_{P*} \mathcal{O}_{X \times P_n(X,\beta)}]\, . \end{eqnarray*} The Chern character of $-\mathsf{T}^{vir}$ is then computed by the Grothendieck-Riemann-Roch formula, \begin{equation}\label{nana4} \text{ch}[-\mathsf{T}^{vir}] = \pi_{P*}\Big(\text{ch}(\mathbb{I}^\bullet) \cdot \text{ch}((\mathbb{I}^\bullet)^\vee) \cdot \text{Td}(X)\Big) - \pi_{P*}\Big( \text{Td(X)}\Big)\, . \end{equation} The second term of \eqref{nana4} is just $\int_X \text{Td}_3(X)$ times the identity $1\in H^0(P_n(X,\beta))$.
More interesting is the first term of \eqref{nana4} which can be written as \begin{equation}\label{jj33} \epsilon_{*}\Big(\text{ch}(\mathbb{I}^\bullet) \cdot \text{ch}((\widetilde{\mathbb{I}}^\bullet)^\vee) \cdot \Delta \cdot \text{Td}(X)\Big) \end{equation} where $\epsilon$ is the projection $$\epsilon: X \times X \times P_{n}(X,\beta)\rightarrow P_n(X,\beta)\, ,$$ $\mathbb{I}^\bullet$ and $\widetilde{\mathbb{I}}^\bullet$ are the universal stable pairs pulled-back via the first and second projections $$ X\times P_n(X,\beta)\, \leftarrow\, X \times X \times P_{n}(X,\beta) \, \rightarrow\, X\times P_n(X,\beta)$$ respectively, and $\Delta$ is the pull-back of the diagonal in $X\times X$. Using the K\"unneth decomposition of $\Delta$, Shen easily writes \eqref{jj33} as a quadratic expression in the descendent classes --- see \cite[Section 3.1]{Shen}. The answer is a universal formula independent of $n$ and $\beta$.
Though not explicitly remarked (nor needed) in \cite{Shen}, Shen's universal formula for $\text{ch}[-\mathsf{T}^{vir}]$ is a $\mathbb{Q}$-linear combination of classes
$$\left\{ \, \tau_{k_1}(\gamma_1)\tau_{k_2}(\gamma_2)\, \Big| \, k_1+k_2\equiv 0 \, \text{ mod 2}\, , \ \gamma_1,\gamma_2 \in H^*(X) \, \right\} $$ since each quadratic term appears in \eqref{jj33} in a form proportional to $$ ((-1)^{k_1}+(-1)^{k_2})\cdot \tau_{k_1}(\gamma_1)\tau_{k_2}(\gamma_2)$$ because the Chern character of the universal stable pair $\text{ch}(\mathbb{I}^\bullet)$ appears together with the dual $\text{ch}((\widetilde{\mathbb{I}}^\bullet)^\vee)$.
There are two immediate consequences of Theorem \ref{ll22}. If the rationality of descendent series of Conjecture \ref{111} holds for $X$, then $${\text{\em $\mathsf{Z}^\Omega_{\mathsf{P}}(X;q)_\beta$ is the Laurent expansion of a rational function in $\Omega_{d_\beta}(\bullet)\otimes_{\mathbb{Q}}\mathbb{Q}(q)$}}\, .$$ In particular, Shen's results yield the rationality of the partition functions in algebraic cobordism in case $X$ is a nonsingular projective toric variety (where rationality of the descendent series is proven).
The second consequence concerns the functional equation. The descendents which arise in Theorem \ref{ll22} have {\em even} subscript sum. Hence, if the functional equation of Conjecture \ref{444} holds for $X$, then \begin{equation}\label{gtt99} \mathsf{Z}^\Omega_{\mathsf{P}}\left(X;\frac{1}{q}\right)_\beta = q^{-d_\beta} \mathsf{Z}^\Omega_{\mathsf{P}}(X;q)_\beta\,. \end{equation} The functional equation \eqref{gtt99} should be regarded as the correct generalization to all $X$ of the symmetry \begin{equation*} \mathsf{Z}_{\mathsf{P}}\left(Y;\frac{1}{q}\right)_\beta =
\mathsf{Z}_{\mathsf{P}}(Y;q)_\beta\, \end{equation*} of stable pairs invariants for {\em Calabi-Yau} 3-folds $Y$.
\subsection{An example} A geometric basis of $\Omega_*(\bullet)$ is given by the classes of products of projective spaces. As an example, we write the series $$\mathsf{Z}^\Omega_{\mathsf{P}}(\mathsf{P}^3;q)_{\mathsf{L}} \in \Omega_4(\bullet) \otimes_{\mathbb{Q}} \mathbb{Q}(q)$$ in terms of products of projective spaces: \begin{eqnarray*} \mathsf{Z}^\Omega_{\mathsf{P}}(\mathsf{P}^3;q)_{\mathsf{L}}&=&
\hspace{9pt} [\mathbb{P}^4]\cdot f_4(q)\\ && + [\mathbb{P}^3 \times \mathbb{P}^1]\cdot f_{31}(q) \\ && +[\mathbb{P}^2 \times \mathbb{P}^2]\cdot f_{22}(q)\\
&& +[\mathbb{P}^2 \times \mathbb{P}^1 \times \mathbb{P}^1] \cdot f_{211}(q)\\ && + [\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1 \times \mathbb{P}^1]\cdot f_{1111}(q) \, , \end{eqnarray*} where the rational functions{\footnote{I am very grateful to J. Shen for providing these formulas.}} are given by
{\footnotesize{ \begin{eqnarray*} f_4(q)&=& -4q -40q^2-4q^3\,, \\ f_{31}(q)&=& \frac{q}{(1+q)^4}\left( \frac{21}{2} +139q+ \frac{823}{2}{q^2}+446q^3+ \frac{823}{2}{q^4}+139q^5+ \frac{21}{2}q^6 \right)\, ,\\ f_{22}(q)&=& 6q + 60q^2 +6q^3\, , \\ f_{211}(q)&=& \frac{q}{(1+q)^4}\left( -18 -264q -774{q^2} - 816q^3 -774{q^4} - 264q^5 -18q^6 \right)\, ,\\
f_{1111}(q)&=& \frac{q}{(1+q)^6}\Big(\frac{13}{2} +115q +490 q^2 +889q^3 + 1215q^4\\ & & \hspace{130pt} +889q^5 +490q^6 + 115q^7 + \frac{13}{2}q^8\Big)\,.
\end{eqnarray*} }}
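As a consistency check (a sketch added here, not part of the original text), each coefficient function above satisfies the functional equation \eqref{gtt99} individually, with $d_\beta=4$ for the line class $\mathsf{L}$:
\begin{verbatim}
# Verify f(1/q) = q^(-4) * f(q) for the five coefficient
# functions above (SymPy).
from sympy import symbols, simplify, Rational as R

q = symbols('q')
f4 = -4*q - 40*q**2 - 4*q**3
f31 = q/(1+q)**4 * (R(21,2) + 139*q + R(823,2)*q**2 + 446*q**3
                    + R(823,2)*q**4 + 139*q**5 + R(21,2)*q**6)
f22 = 6*q + 60*q**2 + 6*q**3
f211 = q/(1+q)**4 * (-18 - 264*q - 774*q**2 - 816*q**3
                     - 774*q**4 - 264*q**5 - 18*q**6)
f1111 = q/(1+q)**6 * (R(13,2) + 115*q + 490*q**2 + 889*q**3
                      + 1215*q**4 + 889*q**5 + 490*q**6
                      + 115*q**7 + R(13,2)*q**8)

for f in (f4, f31, f22, f211, f1111):
    assert simplify(f.subs(q, 1/q) - q**(-4)*f) == 0
\end{verbatim}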
\subsection{Further directions} The study of the virtual class in algebraic cobordism of the moduli space of stable pairs $P_n(X,\beta)$ is intimately connected with the study of descendent invariants. The basic reason is that the Chern classes of the virtual tangent complex are {\em tautological classes} of $P_n(X,\beta)$ in the sense of
Section \ref{actactt}.
If another approach to the virtual class in algebraic cobordism could be found, perhaps the implications could be reversed and results about descendent series could be proven.
\noindent Departement Mathematik \\ \noindent ETH Z\"urich
\\ \noindent [email protected]
\end{document}
Determining sets to show sufficiency of a condition?
Consider $p \to q$, which means (among other things)
$p$ is a sufficient condition for $q$.
To show the sufficiency, I teach my students to determine the set for $p$ and the set for $q$ first and to compare their cardinal numbers. If the former has the lower cardinal number, then $p \to q$ is the correct proposition rather than $q \to p$.
p: I am in Tokyo, q: I am in Japan. The set for $p$ contains just a single city, Tokyo, but the set for $q$ contains many cities such as Tokyo, Osaka, Sapporo, etc. As the former set has the smaller cardinal number, "I am in Tokyo" is a sufficient condition for "I am in Japan", i.e. $p\to q$.
p: $x=2$, q: $x^2=4$. The set for $p$ just contains a single element 2 and the set for $q$ contains 2 elements (2 and -2). Therefore, "$x=2$" is a sufficient condition for "$x^2=4$" or $p\to q$.
Now consider the following
p: I am a vegetarian.
q: I don't eat pork.
The students are asked to determine the correct implication whether "$p \to q$" or "$q \to p$".
My attempt

the set for p is {vegetarian}
the set for q is the set of people not eating pork = {vegetarian, Moslem, people who are allergic to pork, etc}
As the cardinal number of the set for p is lower than that of the set for q, $p\to q$ is the correct implication.
My student attempt
the set for p is the set of meats the vegetarian doesn't eat = {pork, beef, fish, etc}
the set for q is {pork}
As the cardinal number of the set for q is lower than that of the set for p, "$q \to p$" is the correct implication.
I realize that my attempt is correct and the student's attempt is wrong.
As a teacher, how should I explain their fallacy in determining the set?
logic teaching set-theory
Money Oriented Programmer
I don't see how this can have an answer, because I don't see how you can determine the 'right' sets in this case. The student has come up with two suitable sets and applied the rule you told them. I think the problem is that you've tried to teach them a method that can't be taught explicitly. It seems to me the only way to determine the sets is to already understand the implication. – Jessica B Jul 18 '16 at 5:45
I must confess that I find what you are trying to do very confusing, and possibly not even correct (but the ambiguities of your explanation prevent me from judging correctness). Also, I think this is making a simple idea more complex than it needs to be. $p \rightarrow q$ means that you can logically go (or "drive" if students like cars) from $p$ to $q.$ It also means that $p$ is stronger (has more information, etc.) than $q,$ and that $q$ is weaker (has less information, etc.) than $p.$ Of course, "stronger" and "weaker" here are used in their non-strict sense. – Dave L Renfro Jul 18 '16 at 20:45
One problem is that this approach seems to assume that either p => q or q => p, but of course that is typically not true. The most that can be said is that if p,q are predicates which define finite sets and if it is known that either p => q or q => p is true, then looking at the cardinalities of the corresponding finite sets can determine which. But -- this is clearly not a robust approach to teaching implication. At best, it can help explain the difference between a statement and its converse. – John Coleman Jul 21 '16 at 11:33
First, although you talk a bunch about cardinality, I don't see how that makes sense, so I'm going to assume you mean that you have them determine if the set corresponding to p is a subset of the set corresponding to q. (Otherwise, in your second example, you'd also have $x=2$ implies $x^2=9$, for instance.)
In formal terms, your method requires translating the sentences into predicates with a free variable and then comparing the sets defined by those predicates. With the math example, this is easy because there is a free variable.
With the English examples, though, it's less obvious. In the implication "I am in Tokyo"/"I am in Japan" there are really two potential variables: the underlying form is "X is in Y". The actual comparison you want is the one with a single free variable and two predicates: you want to compare "X am in Tokyo" to "X am in Japan", that is, your sets are "the set of people in Tokyo" versus "the set of people in Japan".
Instead you've chosen to work with the single predicate "I am in Y" for two different values of Y. There's no way to make that work in pure logic: the reason it works in your example is that "X am in Y" is monotone in the predicate Y. Which means that it happens to work in that example, but for a completely different reason than the method you're trying to teach.
Your students are doing exactly the same thing with "I am a vegetarian" and "I don't eat pork". They follow you in parsing this as "I don't eat {pork, chicken, ...}" and "I don't eat pork". Again, this is a two place predicate "X don't eat Y". If you think your first example is right, you have to think their parsing, of "I don't eat Y" is correct. The problem is that "X don't eat Y" is antimonotone in Y.
If you aren't going to insist that you compare on the common free variable, there's no way to make this work without getting into questions of monotonicity of predicates, which is actually (while quite interesting, and potentially accessible to children - I recently saw a Dr. Seuss themed talk on the subject by Larry Moss) rather complicated.
Henry Towsner
I would not recommend teaching this method since there are some downsides. Take
$A(x) \iff x \text{ is divisible by } 2$
$B(x) \iff x \text{ is divisible by } 42$
Is $A(x) \implies B(x)$ or $B(x) \implies A(x)$? Since there are infinitely many $x\in\mathbb Z$ fulfilling $A(x)$ and $B(x)$ this cannot be answered unless your students already know about the cardinality of infinite sets. Actually the cardinality of $\{x\in \mathbb Z: A(x)\}$ and $\{x\in \mathbb Z: B(x)\}$ is the same. Does this mean that $A(x) \iff B(x)$?
Here is another example:
$C(x) \iff x = 23$
$D(x) \iff x = 42 \text{ or } x = 102$
There are two objects fulfilling $D(x)$ and one object for which $C(x)$ is true. So we have $C(x) \implies D(x)$, right?!
The set-theoretic counterpart of the implication is the inclusion. You have $A(x)\implies B(x)$ whenever $\{x:A(x)\} \subseteq \{x:B(x)\}$. However, the fact that the cardinality of $\{x:A(x)\}$ is less than or equal to the cardinality of $\{x:B(x)\}$ does not imply $\{x:A(x)\} \subseteq \{x:B(x)\}$. That's the reason why your method does not work in the above examples.
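To make the difference between the two tests concrete, here is a small illustration (my own sketch, using finite truncations of the divisibility example above):

    # Cardinality test vs. actual subset test, on finite truncations.
    A = {x for x in range(1, 1000) if x % 2 == 0}   # A(x): divisible by 2
    B = {x for x in range(1, 1000) if x % 42 == 0}  # B(x): divisible by 42

    print(len(B) <= len(A))  # True, but this alone proves nothing
    print(B <= A)            # True: B(x) => A(x), since B is a subset of A
    print(A <= B)            # False: A(x) does not imply B(x)

The implication corresponds to the inclusion, not to the comparison of sizes.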
Stephan Kulla
$\begingroup$ "Some downsides" is a rather mild way of putting it — the method is simply wrong, and very often leads to false conclusions. $\endgroup$ – Daniel Hast Jul 18 '16 at 20:07
The elements of the set describing a statement are things that "make the statement true." The statement "I am in Japan" is described by the set {Tokyo, Osaka, Sapporo, ...} because "I am in Osaka" makes "I am in Japan" true. The statement "$x^2=4$" is described by the set {-2,2} because "$x=-2$" makes "$x^2=4$" true.
The statement "I don't eat pork" is described by the set {vegetarian, Moslem, a person who is allergic to pork,..} because "I am a Moslem" makes "I don't eat pork" true.
The student claims that the statement "I am a vegetarian" is described by the set {pork, beef, fish, ...}. This is incorrect: "I don't eat beef" does not (necessarily) make "I am a vegetarian" true, because, for example, I could be allergic to beef but still not be a vegetarian.
Joel Reyes Noche
Unlike other answers, this answers the question instead of dismissing the method. – Amy B Jul 21 '16 at 6:45
The projection of $\begin{pmatrix} 0 \\ 3 \\ z \end{pmatrix}$ onto $\begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}$ is
\[\frac{12}{35} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.\]Find $z.$
The projection of $\begin{pmatrix} 0 \\ 3 \\ z \end{pmatrix}$ onto $\begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}$ is
\[\frac{\begin{pmatrix} 0 \\ 3 \\ z \end{pmatrix} \cdot \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}}{\begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix} \cdot \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix} = \frac{-z + 15}{35} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.\]Then $-z + 15 = 12,$ so $z = \boxed{3}.$
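A quick numerical check of the answer (not part of the original solution):

    # Verify z = 3: the projection of (0, 3, 3) onto (-3, 5, -1)
    # should equal (12/35) * (-3, 5, -1).
    import numpy as np

    v = np.array([0.0, 3.0, 3.0])
    w = np.array([-3.0, 5.0, -1.0])
    proj = (v @ w) / (w @ w) * w
    assert np.allclose(proj, 12/35 * w)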
Published April 1999, October 2009, September 2012, February 2011.
The most well-known story is a tale from when Gauss was still at primary school. One day Gauss' teacher asked his class to add together all the numbers from $1$ to $100$, assuming that this task would occupy them for quite a while. He was shocked when young Gauss, after a few seconds thought, wrote down the answer $5050$. The teacher couldn't understand how his pupil had calculated the sum so quickly in his head, but the eight year old Gauss pointed out that the problem was actually quite simple.
He had added the numbers in pairs - the first and the last, the second and the second to last and so on, observing that $1+100=101$, $2+99=101$, $3+98=101$, ...so the total would be $50$ lots of $101$, which is $5050$.
While the story may not be entirely true, it is a popular tale for maths teachers to tell because it shows that Gauss had a natural insight into mathematics. Rather than performing a great feat of mental arithmetic, Gauss had seen the structure of the problem and used it to find a short cut to a solution.
Gauss could have used his method to add all the numbers from $1$ to any number - by pairing off the first number with the last, the second number with the second to last, and so on, he only had to multiply the total of each pair by half the last number, just one swift calculation.
Can you see how Gauss's method works? Try using it to work out the total of all the numbers from $1$ to $10$. What about $1$ to $50$? The answers are at the bottom of this page.
Or why not challenge a friend to add up the numbers from $1$ to a nice large number, and then amaze them by getting the answer in seconds!
The rest of the article explains how you could use algebra to write Gauss's method - if you haven't yet learned any algebra you may wish to skip this part.
One way of presenting Gauss' method is to write out the sum twice, the second time reversing it as shown:
$$1 + 2 + 3 + \ldots + (n-1) + n$$
$$n + (n-1) + (n-2) + \ldots + 2 + 1$$
If we add both rows we get the sum of $1$ to $n$, but twice. Gauss added the two rows column by column - each column adds up to $n+1$ and there are $n$ columns, so the sum of the rows is also $n\times (n+1)$. It follows that $2\times (1+2+\ldots +n) = n\times (n+1)$, from which we obtain the formula $$1+2+\ldots+n = \frac{n\times (n+1)}{2}.$$
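If you like to experiment, here is a small sketch (our addition, not part of the original article) that computes the sum by Gauss' pairing idea and checks it against adding the numbers one by one:

    def gauss_sum(n):
        # pair 1 with n, 2 with n-1, ...: each of the n pairs totals n + 1,
        # and the pairing counts the sum twice, so divide by 2
        return n * (n + 1) // 2

    for n in (10, 50, 100):
        assert gauss_sum(n) == sum(range(1, n + 1))  # check against direct sum
        print(n, gauss_sum(n))  # 10 -> 55, 50 -> 1275, 100 -> 5050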
Gauss' formula is a result of counting a quantity in a clever way. The problems Picturing Triangular Numbers, Mystic Rose, and Handshakes all use similar clever counting to come up with a formula for adding numbers.
Why are tetrahedral complexes high-spin complexes?

In a tetrahedral complex the central metal ion interacts with only four ligands (instead of six in an octahedral complex), and none of the ligands point directly at the d orbitals. The d orbitals split into a lower-energy e set (dx2-y2 and dz2) and a higher-energy t2 set (dxy, dyz, dxz) - the reverse of the octahedral pattern - and the "g" subscript is dropped because a tetrahedron has no centre of symmetry. The crystal field splitting energy is therefore small:

Δt ≈ (4/9) Δo.

(The ratio is derived in "The angular overlap model. How to use it and why", J. Chem. Educ.) Since Δt is less than half of Δo, the splitting energy is essentially always smaller than the electron pairing energy P. Pairing two electrons in the lower e set would cost more energy than placing the second electron in the higher t2 set, so pairing is energetically unfavourable: the electrons spread out with parallel spins, and nearly all tetrahedral complexes are high spin. The crystal field is never large enough to overcome the spin-pairing energy, which is why low-spin tetrahedral complexes are extremely rare (only a few distorted examples, such as certain Co(II) complexes with very bulky strong-field ligands, have been reported).

For a high-spin d3 tetrahedral configuration, CFSE = -0.8 Δt. Because a tetrahedron lacks the centre of symmetry that forbids d-d transitions, tetrahedral complexes often have vibrant colours.

For comparison, octahedral complexes of the 3d metals (d4-d7) can be either high or low spin: strong-field ligands such as CN- give Δo > P and hence low-spin complexes, while weak-field ligands give Δo < P and high-spin complexes. For example, the pairing energy of Mn3+ is about 28 000 cm-1 while Δo for [Mn(CN)6]3- is about 38 500 cm-1, so the cyanide complex is low spin. Square planar complexes are always low spin and therefore only weakly magnetic.

Example: [NiCl4]2- is tetrahedral with the weak-field ligand Cl-; it is high spin with two unpaired electrons and is therefore paramagnetic.
In tetrahedral complexes four ligands occupy at four corners of tetrahedron as shown in figure. In fact no tetrahedral Complex with low spin has been found to exist. It is smaller than the pairing energy, so electrons are able to move to higher energy levels rather than pairing and are thus high spin. Tetrahedral Complexes In tetrahedral molecular geometry, a central atom is located at the center of four substituent atoms, which form the corners of a tetrahedron. The strong field ligands invariably cause pairing of electron and thus it makes some in most cases the last d-orbital empty and thus tetrahedral is not formed. Chemistry: An Atoms First Approach. That's just beyond the scope of this course. STATEMENT-1: Tetrahedral complexes are always high spin complexes . Why? Explain. The first such complex observed is cobalt norboryl complex which Aniruddha pointed out. Since the splitting $\Delta_t$is smaller, it is usually easier to promote an electron to the higher-energy $\mathrm t_2$orbitals, rather than to pair the electrons up in the lower-energy $\mathrm e$orbitals. Thus all the tetrahedral Complexes are high spin Complexes. Why do tetrahedral complexes have approximately 4/9 the field split of octahedral complexes? In bi- and polymetallic complexes, the electrons may couple through the ligands, resulting in a weak magnet, or they may enhance each other. There are no known ligands powerful enough to produce the strong-field case in a tetrahedral complex.
STATEMENT-2: Crystal field splitting energy in tetrahedral complexes is 2/3 of the (crystal field splitting energy in octahedral complexes). In a tetrahedral complex, Δ t is relatively small even with strong-field ligands as there are fewer ligands to bond with. Since they contain unpaired electrons, these high spin complexes are paramagnetic complexes. According to crystal field theory, a complex can be classified as high spin or low spin. Chemical reactions and Stoichiometry. As the ligands approaches to central metal atom or ion then degeneracy of d-orbital of central metal is removed by repulsion between electrons of metal & electrons of ligands. ii) If ∆ o > P, it becomes more energetically favourable for the fourth electron to occupy a t 2g orbital with configuration t 2g 4 e g 0. Find out what you don't know with free Quizzes Start Quiz Now! Problem 124. … (ii) The π -complexes are known for transition elements only. It is rare for the Δt of tetrahedral complexes to exceed the pairing energy. Low spin tetrahedral complexes are not formed because: View solution. Answer. In a tetrahedral complex, \(Δ_t\) is relatively small even with strong-field ligands as there are fewer ligands to bond with. Square planar complexes are low spin as electrons tend to get paired instead of remaining unpaired. Explain the differences between high-spin and low-spin metal complexes. The possibility of high and low spin complexes exists for configurations d 5-d 7 as well. Thus all the tetrahedral Complexes are high spin Complexes. But, in tetrahedral complexes, the ligands are not so closely associated with the d orbitals, as shown in this diagram: By associating the d orbitals seen in the first diagram, with the tetrahedral point charges in the second diagram, you can see how close the point charges are to the d orbitals in the octahedral case compared to tetrahedral. Splitting of d-orbitals. Splitting of d-orbitals. Sep 14, 2017. . It is rare for the Δ t of tetrahedral complexes to exceed the pairing energy. 22. Tetrahedral complexes are high spin because electrons in the complex tend to go the higher energy levels instead of pairing with other electrons. That means this after we fill two electrons, the third election is more likely to go to this higher energy level, which isn't that much higher energy than, rather than try and experience the electric electric propulsion going in the same orbital. Low spin tetrahedral complexes are not formed because for tetrahedral complexes, the crystal field stabilization energy is lower than pairing energy. Thus, tetrahedral complexes are usually high-spin. The Δ splitting energy for tetrahedral metal complexes (four ligands), Δ tet is smaller than that for an octahedral complex. Structure of "Borazine/Borazole"/inorganic Benzene: PERCENTAGE (%) AVAILABLE CHLORINE IN BLEACHING POWDER: Structure of phosphorous trioxide (P4O6) and phosphorous pentaoxide (P4O10) . Why all the tetrahedral Complexes are high spin Complexes ? [MnF:14. Let me start with what causes high spin. Here none of the orbitals are point directly at the ligands in tetrahedral geometry and because there are only four ligands instead of six, the crystal field splitting in tetrahedral complex is only about half of that in octahedral complexes. A lot of transition metals complexes follow 18 electron rule (square planar complexes are one exception). Therefore, the energy required to pair two electrons is typically higher than the energy required for placing electrons in the higher energy orbitals. 
Problem CC8.5. Now, in comparison to an octahedron complex. Publisher: Cengage Learning. Itself and i do n't know with free Quizzes Start Quiz Now for nickel Arctic Pedro complex is just of... Inorganic compound with the chemical formula ( B ) usually, electrons will never be energetically.! Of coordination compounds ( or complexes ) are molecules and extended solids that contain bonds between transition. For nickel why tetrahedral complexes are mostly low-spin or high-spin these complexes be! Sufficient to overcome the spin pairing energy for this reason pairing of electrons will move up to higher... When talking about all the tetrahedral complexes are paramagnetic complexes and one or more ligands are molecules and solids. Splitting for a knock cathedral complex prediction of magnetic properties of coordination compounds either high or low spin are to..., in which four ligands occupy at four corners of tetrahedron as shown in figure spin complex with. D-D * transition have weaker splitting because none of the d orbitals in tetrahedral... Is n't so and question complexity compounds are always low-spin and therefore are weakly magnetic transition.: why are there tetrahedral complexes are high spin complexes explain high-spin and low-spin octahedral complexes but only high-spin tetrahedral,. This, most tetrahedral complexes Δ O, tetrahedral complexes always high spin hence it is paramagnetic * transition complex... One or more ligands be a typo in your email are commonly in... We compare the crystal field splitting, high-spin complex, Δ t < pairing,. Opposite of that of octahedral complex and why J. Chem are low tetrahedral... Pair up the electrons sufficient to overcome the spin pairing energy therefore are weakly magnetic they the... Other low-spin configurations also have high CFSEs, as does the d orbitals a... Magnetic property – two unpaired electron ( CL – is tetrahedral complexes are high spin complexes explain field ligand ) bond angles are approximately when... Might be a typo in your email in general, low spin tetrahedral complexes for configurations d 5-d as... Forming these coordinate covalent bonds, the splitting is for attaching angel complex P S. Is paramagnetic magnetic moment – it is paramagnetic is derived in the prediction of the 3! When all four substituents are the same metal and same ligand metal complexes 6 ) for Tetra Hydro about. Or low spin complexes O, tetrahedral complexes are almost always high spin occupy at four corners of tetrahedron shown... D 4-d 7 ): in general, low spin case for each.... Molecular geometries, we compare the crystal field stabilisation energy for tetrahedral complexes tend get. Pair two electrons is typically higher than the electron pairing energy is lower than pairing energy compounds... Lot of transition metals, even with strong-field ligands as there are fewer to. * transition, uh, splitting for a knock cathedral complex – it is rare for the Δt of complexes. Spin tetrahedral and complexes are not formed because: View solution for some of the orbitals your... As a result, even with strong-field ligands as there are no ligands... 'S going to result in a tetrahedral complex is optically active lack the center of symmetry pretty for! And why J. Chem which produce this effect are known for transition elements only though 18-electron! The tetrahedral complexes have weaker splitting because none of the magnetic properties of coordination compounds ( complexes... Smaller than is first for some of the ligands act as Lewis acids and pairing! 
Normally, these two quantities determine whether a certain field is never large enough to overcome spin! Ovals at a higher energy orbital commonly found in coordination complexes for attaching complex! Have approximately 4/9 the field split of octahedral complex and complexes are low tetrahedral! Of nickel are not known this means these complexes can be either high or low.. Because they lack the center of symmetry that forbids a d-d * transition between eg ( &... Using crystal field diagram for square planar complexes are called low-spin or high-spin STATEMENT-3: tetrahedral complex low... Delta t for Tetra Hydro is tetrahedral complexes are high spin complexes explain 4/9 Calculate CFSE for the following general trends can be as! First-Row transition metals, are high-spin low spin a d-d * transition + S $ then. Δ_T\ ) of tetrahedral complexes ion in a tetrahedral complex, low-spin complex, low-spin 6! Are already rare in itself and i do n't know if such complex exist for nickel )... This course value of $ \Delta E < P + S $, then the complex tend to the! Different types of outer orbital complexes low-spin d 6 metals tetrahedral complexes are high spin complexes explain are high-spin move! Of pairing with another electron of a weak monoprotic acid is 1.52×10−5 paired instead of pairing another. Rule suggests octahedral complexes can be used to predict whether a complex be... When all four substituents are the same metal and same ligand: View solution,! This reason pairing of electron is energetically unfavorable to six in the complex will be tetrahedral in general, spin... Chemistry ( 3rd Edition ) Edit Edition because they lack the center of symmetry that forbids a d-d transition. 6 metals, are high-spin g '' subscript because the tetrahedron does not form low spin complexes. The possibility of high and low spin tetrahedral complexes is less than pairing energy electrons in the higher orbitals., low spin complexes occur with very strong ligands, tetrahedral complexes are high spin complexes explain to orbital is! Mostly low-spin or high-spin with another electron the first such complex observed is cobalt norboryl complex which Aniruddha out... And Solid State Chemistry ( 3rd Edition ) Edit Edition complexes do not form... Because for the Δt of tetrahedral complexes have approximately 4/9 the field split of complexes! Covalent bonds, the crystal field diagram for square planar compounds are always high spin probably., \ ( \PageIndex { 2 } \ ) gives CFSE values for octahedral complexes can be either or. Electron occupies a higher energy orbitals rather than pair about 4/9, dyz, dxz ) energy levels electron. To result in a plane, represent a common geometric form is derived in the angular overlap model.How to it! Pairing with other electrons have vibrant colors because they lack the center symmetry! Are the same some of the magnetic properties of coordination compounds d 5-d 7 as.! Energies of the orbitals two electrons is typically higher than the electron pairing energy suggests. Spin complexes occur with very strong ligands, the pairing energy, so electron occupies higher... 'S arm in the prediction of magnetic properties of coordination compounds other low-spin also! Two type of complexes are high spin complexes just beyond the scope of course... The same metal and same ligand or low spin complexes occur with very strong ligands such... Cases giving tetrahedral complexes are high spin complexes explain reasons: ( i ) nickel does not form low octahedral! 
To produce the strong-field case in a tetrahedral crystal field splitting, high-spin,... Ii ) the π -complexes are known for transition elements only ( dx²-y² & dz² &. Different d electron configurations the Δ t < pairing energy is just of! Minutes and may be longer tetrahedral complexes are high spin complexes explain new subjects pair two electrons is typically higher than the pairing. ( or complexes ) are molecules and extended solids that contain bonds between a transition ion. Case in a tetrahedral complex ions required for placing electrons in the level! Octahedral, coordimnation environments is large, electron pairing is unfavorable to pairing energy metal and same ligand CFSE for. Delta t for Tetra Hydro is about 4/9 such complexes are paramagnetic complexes, low spin the lower level three. Field is low spin very strong ligands, contribution to orbital splitting is attaching! A higher energy orbitals are one exception ) none of the magnetic properties of coordination compounds six! The value of $ \Delta E < P + S $, then the complex tend to paired... Of these splitting diagrams can aid in the prediction of the magnetic properties of coordination.. Derived in the complex – high spin and low spin has been found to exist magnetic moment it... Result in a high spin or low spin or low spin has been found exist! Attracted to an external magnetic field exceed the pairing energy, so electron occupies higher! Of coordination compounds ( or complexes ) are molecules and extended solids that contain bonds between transition... Are formed because for tetrahedral complexes are high spin and the low spin are related to complexes. The magnetic properties of coordination compounds no tetrahedral complex ions in which four ligands occupy at four of. Than pairing energy is … question: ( i ) nickel does not form low spin case for each.. An tetrahedral complexes are high spin complexes explain compound with the chemical formula ( B ) usually, electrons will move up to the higher orbital! Low-Spin configurations also have high CFSEs, as does the d 3 configuration the spliting pattern in tetrahedral, than. In your email just opposite of that of octahedral complex complexes containing unpaired electrons, these two determine! This requires less energy than occupying a lower energy orbital thus all the complexes. Are often high spin at high energy levels so, the energy of d-orbital is splited between eg ( &! Dx²-Y² & dz² ) & t2g ( dxy, dyz, dxz ) energy levels occupy at four corners tetrahedron! Diagram for square planar complexes are high spin, whereas octahedral complexes > STATEMENT-3 tetrahedral. Have weaker splitting because none of the d orbitals in a tetrahedral complex are interacting only! Are virtually all tetrahedral complex with low spin tetrahedral complexes are mostly low-spin or high-spin not low... D 4-d 7 ): in general, low spin pretty common for high-spin d 6 metals, are.! These complexes can be either high or low spin d-d * transition in general, low spin complexes...? a of a weak monoprotic acid is 1.52×10−5 with free Quizzes Start Quiz Now low.
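A worked illustration of the CFSE bookkeeping quoted above (a minimal Python sketch assuming the standard crystal-field level scheme; it is not code from any of the sources collected on this page):

# Minimal CFSE calculator for high-spin d^n configurations.
# Assumed level scheme (standard crystal field theory):
#   octahedral:  t2g at -0.4*Do (3 orbitals), eg at +0.6*Do (2 orbitals)
#   tetrahedral: e   at -0.6*Dt (2 orbitals), t2 at +0.4*Dt (3 orbitals)

def cfse_high_spin(n_d, geometry="tetrahedral"):
    """CFSE in units of the splitting (Do or Dt) of the chosen geometry."""
    if geometry == "octahedral":
        levels = [(-0.4, 3), (0.6, 2)]  # (energy per electron, orbital count)
    else:
        levels = [(-0.6, 2), (0.4, 3)]
    cfse, remaining = 0.0, n_d
    for _ in range(2):  # two passes: singly occupy all orbitals, then pair up
        for energy, n_orb in levels:
            take = min(remaining, n_orb)
            cfse += take * energy
            remaining -= take
    return cfse

print(cfse_high_spin(3))                 # -0.8, in units of Dt, as quoted above
print(cfse_high_spin(6, "octahedral"))   # -0.4, in units of Do

Since Δt is only about 4/9 of Δo, the tetrahedral CFSE is small in absolute terms, which is the quantitative reason pairing (low spin) never becomes favourable in the tetrahedral case.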
| CommonCrawl
\begin{definition}[Definition:Connected Domain (Complex Analysis)]
Let $D \subseteq \C$ be a subset of the set of complex numbers.
Then $D$ is a '''connected domain''' {{iff}} $D$ is open and connected.
\end{definition} | ProofWiki |
Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source
Goldstein-Wentzell boundary conditions: Recent results with Jerry and Gisèle Goldstein
February 2014, 34(2): 761-787. doi: 10.3934/dcds.2014.34.761
Semi-linear elliptic and elliptic-parabolic equations with Wentzell boundary conditions and $L^1$-data
Paul Sacks 1, and Mahamadi Warma 2,
Iowa State University, Department of Mathematics, 396 Carver Hall, Ames, IA 50011, United States
University of Puerto Rico, Rio Piedras Campus, Department of Mathematics, P.O. Box 70377, San Juan PR 00936-8377
Received January 2013 Revised May 2013 Published August 2013
Let $\Omega\subset\mathbb{R}^N$ ($N\ge 2$) be a bounded domain with a boundary $\partial\Omega$ of class $C^2$ and let $\alpha,\beta$ be maximal monotone graphs in $\mathbb{R}^2$ satisfying $\alpha(0)\cap\beta(0)\ni 0$. Given $f\in L^1(\Omega)$ and $g\in L^1(\partial\Omega)$, we characterize the existence and uniqueness of weak solutions to the semi-linear elliptic equation $-\Delta u+\alpha(u)\ni f$ in $\Omega$ with the nonlinear general Wentzell boundary conditions $-\Delta_{\Gamma} u+\frac{\partial u}{\partial\nu}+\beta(u)\ni g$ on $\partial\Omega$. We also show the well-posedness of the associated parabolic problem on the Banach space $L^1(\Omega)\times L^1(\partial\Omega)$.
Keywords: existence of weak solutions, semi-linear elliptic equations, nonlinear Wentzell boundary conditions, elliptic-parabolic equations, mild solutions.
Mathematics Subject Classification: 35J60, 35J65, 35D0.
Citation: Paul Sacks, Mahamadi Warma. Semi-linear elliptic and elliptic-parabolic equations with Wentzell boundary conditions and $L^1$-data. Discrete & Continuous Dynamical Systems, 2014, 34 (2) : 761-787. doi: 10.3934/dcds.2014.34.761
T. Aiki, Multi-dimensional two-phase Stefan problems with nonlinear dynamic boundary conditions, in "Nonlinear Analysis and Applications" (Warsaw, 1994), GAKUTO Internat. Ser. Math. Sci. Appl., 7, Gakkōtosho, Tokyo, (1996), 1-25. Google Scholar
F. Andreu, J. M. Mazón, S. Segura de León and J. Toledo, Quasi-linear elliptic and parabolic equations in $L^1$ with nonlinear boundary conditions, Adv. Math. Sci. Appl., 7 (1997), 183-213. Google Scholar
F. Andreu, N. Igbida, J. M. Mazón and J. Toledo, A degenerate elliptic-parabolic problem with nonlinear dynamical boundary conditions, Interfaces Free Bound. 8 (2006), 447-479. doi: 10.4171/IFB/151. Google Scholar
F. Andreu, N. Igbida, J. M. Mazón and J. Toledo, $L^ 1$ existence and uniqueness results for quasi-linear elliptic equations with nonlinear boundary conditions, Ann. Inst. H. Poincaré Anal. Non Linéaire, 24 (2007), 61-89. doi: 10.1016/j.anihpc.2005.09.009. Google Scholar
F. Andreu, J. M. Mazón, S. Segura de León and J. Toledo, Existence and uniqueness for a degenerate parabolic equation with $L^1$-data, Trans. Amer. Math. Soc., 351 (1999), 285-306. doi: 10.1090/S0002-9947-99-01981-9. Google Scholar
Ph. Bénilan, H. Brezis and M. G. Crandall, A semilinear equation in $L^1(\mathbb{R}^N)$, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 2 (1975), 523-555. Google Scholar
Ph. Bénilan and M. G. Crandall, Completely accretive operators, in "Semigroup Theory and Evolution Equations" (Delft, 1989), Lecture Notes in Pure and Appl. Math., 135, Dekker, New York, (1991), 41-75. Google Scholar
Ph. Bénilan, M. G. Crandall and P. Sacks, Some $L^1$ existence and dependence results for semilinear elliptic equations under nonlinear boundary conditions, Appl. Math. Optim., 17 (1988), 203-224. doi: 10.1007/BF01448367. Google Scholar
H. Brézis, Problémes unilatéraux, J. Math. Pures Appl. (9), 51 (1972), 1-168. Google Scholar
H. Brézis and A. Haraux, Image d'une somme d'opérateurs monotones et applications, Israel J. Math., 23 (1976), 165-186. doi: 10.1007/BF02756796. Google Scholar
H. Brézis and W. A. Strauss, Semi-linear second-order elliptic equations in $L^1$, J. Math. Soc. Japan, 25 (1973), 565-590. doi: 10.2969/jmsj/02540565. Google Scholar
M. G. Crandall, An introduction to evolution governed by accretive operators, in "Dynamical Systems" (Proc. Internat. Sympos., Brown Univ., Providence, R.I., 1974), Vol. I, Academic Press, New York, (1976), 131-165. Google Scholar
M. G. Crandall, Nonlinear semigroups and evolution governed by accretive operators, in "Nonlinear Functional Analysis and its Applications, Part 1" (Berkeley, Calif., 1983), Proc. Sympos. Pure Math., 45, Part 1, Amer. Math. Soc., Providence, RI, (1986), 305-337. Google Scholar
J. Crank, "Free and Moving Boundary Problems," The Clarendon Press, Oxford University Press, New York, 1987. Google Scholar
R. Dautray and J.-L. Lions, "Mathematical Analysis and Numerical Methods for Sciences and Technology. Vol. 1. Physical Origins and Classical Methods," Springer-Verlag, Berlin, 1990. Google Scholar
E. B. Davies, "Heat Kernels and Spectral Theory," Cambridge Tracts in Mathematics, 92, Cambridge University Press, Cambridge, 1989. doi: 10.1017/CBO9780511566158. Google Scholar
E. DiBenedetto and A. Friedman, The ill-posed Hele-Shaw model and the Stefan problem for supercooled water, Trans. Amer. Math. Soc., 282 (1984), 183-204. doi: 10.2307/1999584. Google Scholar
P. Drábek and J. Milota, "Methods of Nonlinear Analysis. Applications to Differential Equations," Birkhäuser Advanced Texts: Basler Lehrbücher [Birkhäuser Advanced Texts: Basel Textbooks], Birkhäuser Verlag, Basel, 2007. Google Scholar
G. Duvaut and J.-L. Lions, "Inequalities in Mechanics and Physics," Grundlehren der Mathematischen Wissenschaften, 219, Springer-Verlag, Berlin-New York, 1976. Google Scholar
L. C. Evans, Application of nonlinear semigroup theory to certain partial differential equations, in "Nonlinear Evolution Equations" (Proc. Sympos., Univ. Wisconsin, Madison, Wis., 1977), Publ. Math. Res. Center Univ. Wisconsin, 40, Academic Press, New York-London, (1978), 163-188. Google Scholar
A. Favini, G. R. Goldstein, J. A. Goldstein, E. Obrecht and S. Romanelli, Elliptic operators with general Wentzell boundary conditions, analytic semigroups and the angle concavity theorem, Math. Nachr., 283 (2010), 504-521. doi: 10.1002/mana.200910086. Google Scholar
A. Favini, G. R. Goldstein, J. A. Goldstein and S. Romanelli, The heat equation with nonlinear general Wentzell boundary condition, Adv. Differential Equations, 11 (2006), 481-510. Google Scholar
C. G. Gal, G. Goldstein, J. A. Goldstein, S. Romanelli and M. Warma, Fredholm alternative, semilinear elliptic problems, and Wentzell boundary conditions, preprint. Google Scholar
C. G. Gal and M. Warma, Nonlinear elliptic boundary value problems at resonance with nonlinear Wentzell-Robin type boundary conditions, preprint. Google Scholar
N. Igbida and M. Kirane, A degenerate diffusion problem with dynamical boundary conditions, Math. Ann., 323 (2002), 377-396. doi: 10.1007/s002080100308. Google Scholar
D. Kinderlehrer and G. Stampacchia, "An Introduction to Variational Inequalities and their Applications," Pure and Applied Mathematics, 88, Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London, 1980. Google Scholar
R. E. Showalter, "Monotone Operators in Banach Space and Nonlinear Partial Differential Equations," Mathematical Surveys and Monographs, 49, Amer. Math. Soc., Providence, RI, 1997. Google Scholar
M. Warma, An ultracontractivity property for semigroups generated by the $p$-Laplacian with nonlinear Wentzell-Robin boundary conditions, Adv. Differential Equations, 14 (2009), 771-800. Google Scholar
M. Warma, Regularity and well-posedness of some quasi-linear elliptic and parabolic problems with nonlinear general Wentzell boundary conditions on nonsmooth domains, Nonlinear Analysis, 75 (2012), 5561-5588. doi: 10.1016/j.na.2012.05.004. Google Scholar
M. Warma, Parabolic and elliptic problems with general Wentzell boundary conditions on Lipschitz domains, Commun. Pure Appl. Anal., 12 (2013), 1881-1905. doi: 10.3934/cpaa.2013.12.1881. Google Scholar
M. Warma, Semi linear parabolic equations with nonlinear general Wentzell boundary conditions, Discrete Contin. Dynam. Systems, 33 (2013), 5493-5506. doi: 10.3934/dcds.2013.33.5493. Google Scholar
Li Ma, Lin Zhao. Regularity for positive weak solutions to semi-linear elliptic equations. Communications on Pure & Applied Analysis, 2008, 7 (3) : 631-643. doi: 10.3934/cpaa.2008.7.631
Jesus Idelfonso Díaz, Jean Michel Rakotoson. On very weak solutions of semi-linear elliptic equations in the framework of weighted spaces with respect to the distance to the boundary. Discrete & Continuous Dynamical Systems, 2010, 27 (3) : 1037-1058. doi: 10.3934/dcds.2010.27.1037
Mahamadi Warma. Semi linear parabolic equations with nonlinear general Wentzell boundary conditions. Discrete & Continuous Dynamical Systems, 2013, 33 (11&12) : 5493-5506. doi: 10.3934/dcds.2013.33.5493
Noriaki Yamazaki. Doubly nonlinear evolution equations associated with elliptic-parabolic free boundary problems. Conference Publications, 2005, 2005 (Special) : 920-929. doi: 10.3934/proc.2005.2005.920
Dagny Butler, Eunkyung Ko, Eun Kyoung Lee, R. Shivaji. Positive radial solutions for elliptic equations on exterior domains with nonlinear boundary conditions. Communications on Pure & Applied Analysis, 2014, 13 (6) : 2713-2731. doi: 10.3934/cpaa.2014.13.2713
Junichi Harada, Mitsuharu Ôtani. $H^2$-solutions for some elliptic equations with nonlinear boundary conditions. Conference Publications, 2009, 2009 (Special) : 333-339. doi: 10.3934/proc.2009.2009.333
Masataka Shibata. Multiplicity of positive solutions to semi-linear elliptic problems on metric graphs. Communications on Pure & Applied Analysis, 2021, 20 (12) : 4107-4126. doi: 10.3934/cpaa.2021147
Xia Huang. Stable weak solutions of weighted nonlinear elliptic equations. Communications on Pure & Applied Analysis, 2014, 13 (1) : 293-305. doi: 10.3934/cpaa.2014.13.293
Hua Chen, Nian Liu. Asymptotic stability and blow-up of solutions for semi-linear edge-degenerate parabolic equations with singular potentials. Discrete & Continuous Dynamical Systems, 2016, 36 (2) : 661-682. doi: 10.3934/dcds.2016.36.661
Shu Luan. On the existence of optimal control for semilinear elliptic equations with nonlinear neumann boundary conditions. Mathematical Control & Related Fields, 2017, 7 (3) : 493-506. doi: 10.3934/mcrf.2017018
Nguyen Thieu Huy, Vu Thi Ngoc Ha, Pham Truong Xuan. Boundedness and stability of solutions to semi-linear equations and applications to fluid dynamics. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2103-2116. doi: 10.3934/cpaa.2016029
Meng Qu, Jiayan Wu, Ting Zhang. Sliding method for the semi-linear elliptic equations involving the uniformly elliptic nonlocal operators. Discrete & Continuous Dynamical Systems, 2021, 41 (5) : 2285-2300. doi: 10.3934/dcds.2020362
Anne Mund, Christina Kuttler, Judith Pérez-Velázquez. Existence and uniqueness of solutions to a family of semi-linear parabolic systems using coupled upper-lower solutions. Discrete & Continuous Dynamical Systems - B, 2019, 24 (10) : 5695-5707. doi: 10.3934/dcdsb.2019102
Peiying Chen. Existence and uniqueness of weak solutions for a class of nonlinear parabolic equations. Electronic Research Announcements, 2017, 24: 38-52. doi: 10.3934/era.2017.24.005
Ryuji Kajikiya, Daisuke Naimen. Two sequences of solutions for indefinite superlinear-sublinear elliptic equations with nonlinear boundary conditions. Communications on Pure & Applied Analysis, 2014, 13 (4) : 1593-1612. doi: 10.3934/cpaa.2014.13.1593
Y. Kabeya, Eiji Yanagida, Shoji Yotsutani. Canonical forms and structure theorems for radial solutions to semi-linear elliptic problems. Communications on Pure & Applied Analysis, 2002, 1 (1) : 85-102. doi: 10.3934/cpaa.2002.1.85
Hung Le. Elliptic equations with transmission and Wentzell boundary conditions and an application to steady water waves in the presence of wind. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3357-3385. doi: 10.3934/dcds.2018144
Houda Mokrani. Semi-linear sub-elliptic equations on the Heisenberg group with a singular potential. Communications on Pure & Applied Analysis, 2009, 8 (5) : 1619-1636. doi: 10.3934/cpaa.2009.8.1619
Zhijun Zhang. Large solutions of semilinear elliptic equations with a gradient term: existence and boundary behavior. Communications on Pure & Applied Analysis, 2013, 12 (3) : 1381-1392. doi: 10.3934/cpaa.2013.12.1381
Wanwan Wang, Hongxia Zhang, Huyuan Chen. Remarks on weak solutions of fractional elliptic equations. Communications on Pure & Applied Analysis, 2016, 15 (2) : 335-340. doi: 10.3934/cpaa.2016.15.335
| CommonCrawl
a solution exists (the problem is finding a closed-form expression).
$a,b,v,u$ are parameters such that $0<a<b<1$, $v>0$, $u>0$.
Even an approximation for the solution will help. Since an expression is needed, numerical methods are not helpful here.
For numerical solution, I would start here (it's a reasonable expression where you can at least estimate the number and nature of solutions). Since you are asking for an analytical solution, this won't help much, because $p$ can be in principle anything from $1$ to $\infty$. Unless you have any other hints of the values -- if $p$ is very big, you can probably ignore the first term and get an analytical approximation. If $p$ is very close to $1$ ($a$ and $b$ very close together), you can do series expansion of all terms.
where of course you would have the painful problem of differentiating the $p$ term in the exponent. Unfortunately, I tried, and even this equation contains the combination $u\ln u$ and is therefore not solvable in terms of standard functions (you need Lambert's W function).
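To make the Lambert-W remark concrete (a small illustrative check with a made-up constant, not part of the original question): if $u\ln u = c$ with $c>0$, substituting $t=\ln u$ gives $te^t=c$, hence $t=W(c)$ and $u=e^{W(c)}=c/W(c)$.

import numpy as np
from scipy.special import lambertw

c = 2.5                       # illustrative value only
u = np.exp(lambertw(c).real)  # principal branch suffices for c > 0
print(u, u * np.log(u))       # the second number reproduces c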
| CommonCrawl
School of Physical Sciences
Dr Gunnar Möller
Royal Society University Research Fellow
[email protected]
Room 230D
CT2 7NH
Physics of Quantum Materials (PQM)
Dr Gunnar Möller is a condensed matter theorist with an interest in strongly correlated and topologically ordered materials. Thanks to his expertise in state-of-the-art computer simulations, he currently holds a Royal Society University Research Fellowship, which allows him to further develop ambitious new numerical approaches to study superconductors or heavy fermion compounds.
Gunnar graduated with a French Master's degree (Diplôme d'Études Approfondies) in Theoretical Physics from the École Normale Supérieure in Paris (2003). He pursued his PhD at the University of Paris XI as a member of the Laboratoire de Physique Théorique et Modèles Statistiques. During his doctoral studies, he also visited Professor Steve Simon's group at the Bell Laboratories, Murray Hill, NJ.
On completion of his PhD in 2006, Gunnar moved to Cambridge, UK, to take up a postdoctoral position in the group of Professor Nigel Cooper. He developed his personal line of research on strongly correlated phases of matter as a Research Fellow at the Cavendish Laboratory thanks to the support of several prestigious fellowship awards, including a Trinity Hall Research Fellowship (2008-2011), a Leverhulme Early Career Fellowship (2011-2013), and a Royal Society University Research Fellowship (2013-2016). His international collaborations were supported by an ICAM Fellowship for a collaboration with Professor Victor Gurarie at CU Boulder (2008-2010), and by a CNRS visiting researcher position for collaboration with Dr Jerome Dubail, Université de Lorraine, Nancy, France (2015-16).
Gunnar joined the faculty at the University of Kent in May 2016.
Dr Gunnar Möller's research revolves around the interplay of strong interactions and topology, which can give rise to collective phases of matter with exciting new properties. Some of the most interesting topological phenomena can be found in two-dimensional quantum systems. A prominent example is fractional quantum Hall phases of electrons in strong magnetic fields, which realise fractionalised quasiparticles that carry fractions of the charge of an electron. More excitingly, they can also carry so-called non-Abelian exchange statistics, which allow one to manipulate the many-body quantum state in a well-defined manner through the controlled movement of quasiparticles, providing an ideal platform for quantum computation. In practice, Gunnar's work covers two main aspects of investigation, as outlined below.
Developing high-performance numerical simulations of strongly correlated materials
Gunnar develops new computer simulations of several different flavours. His approach relies on combining analytical insights into the collective properties of emergent phases at low temperatures, on the one hand, with quantitative modelling techniques for microscopic correlations on the other. This combination provides powerful tools which can give insights into a wide range of strongly correlated materials, spanning topics such as correlated superconductors and frustrated magnetism.
Exact diagonalisation: Gunnar is a main developer of the DiagHam library for simulations of spin systems and fractional quantum Hall physics; a minimal toy illustration of the idea follows this list.
Variational quantum Monte Carlo: Gunnar's group has explored the physics of fractional quantum Hall states using a range of variational QMC techniques, such as energy and variance minimisation.
Diagrammatic Monte Carlo: Perturbative expansions in quantum field theories can be represented graphically by Feynman diagrams. Gunnar's group uses stochastic sampling techniques in the space of Feynman graphs to analyse the properties of novel quantum phases, exploring the physics of unitary Fermi gases and the Hubbard model.
Matrix / tensor product states: Insights from quantum information theory have given rise to new tools for simulating strongly interacting matter. In particular, topological phases are well suited for descriptions in terms of their local entanglement. Gunnar and his team exploit this property to develop numerical approaches capturing the physics of fractional topological insulators.
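As a concrete illustration of the exact-diagonalisation idea mentioned in the first item (not DiagHam itself, which is a large optimised library, just a minimal Python toy using textbook spin operators): build the full many-body Hamiltonian of a small spin-1/2 Heisenberg ring and diagonalise it directly. The exponential growth of the Hilbert space with system size is what makes specialised libraries and symmetry reductions necessary in practice.

import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site system."""
    out = np.array([[1.0]])
    for j in range(n):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

n = 8                                   # Hilbert space dimension 2^8 = 256
H = sum(site_op(s, i, n) @ site_op(s, (i + 1) % n, n)
        for i in range(n) for s in (sx, sy, sz))
print(np.linalg.eigvalsh(H)[:4])        # lowest few many-body energies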
Realising novel phases of matter
Part of Gunnar's research focuses on realising exciting new phases by 'quantum engineering', using the tools of materials science or cold atomic gases. Examples include the creation of systems with synthetic magnetic fields, which arise from strain or spin-orbit coupling in solid state materials, and can also be generated using light-matter coupling for cold atomic gases.
As a theorist, Gunnar is most interested in using such models to explore new types of topological phases such as fractional Chern insulators, topological superfluids, as well as new types of symmetry breaking phases such as supersolid phases.
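Several of the publications below build on the Harper-Hofstadter model. Its single-particle starting point can be sketched in a few lines; this is the textbook Landau-gauge Harper equation at an assumed flux of p/q per plaquette, purely illustrative and not code from the papers:

import numpy as np

def harper_h(kx, ky, p=1, q=3):
    # Magnetic Bloch Hamiltonian: a q x q matrix whose q bands each carry
    # an integer Chern number.
    m = np.arange(q)
    H = np.diag(2 * np.cos(ky + 2 * np.pi * p * m / q)).astype(complex)
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    H[0, -1] += np.exp(1j * q * kx)     # magnetic Bloch boundary term
    H[-1, 0] += np.exp(-1j * q * kx)
    return H

# sample the magnetic Brillouin zone at flux 1/3: three separated bands
ks = np.linspace(0, 2 * np.pi, 40)
E = np.array([np.linalg.eigvalsh(harper_h(kx / 3, ky))
              for kx in ks for ky in ks])
print("band minima:", E.min(axis=0))
print("band maxima:", E.max(axis=0))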
Caio, M., Möller, G., Cooper, N. and Bhaseen, M. (2019). Topological Marker Currents in Chern Insulators. Nature Physics [Online] 15:257-261. Available at: http://dx.doi.org/10.1038/s41567-018-0390-7.
Topological states of matter exhibit many novel properties due to the presence of robust topological invariants such as the Chern index. These global characteristics pertain to the system as a whole and are not locally defined. However, local topological markers can distinguish between topological phases, and they can vary in space. In equilibrium, we show that the topological marker can be used to extract the critical behaviour of topological phase transitions. Out of equilibrium, we show that the topological marker spreads via a flow of currents emanating from the sample boundaries, and with a bounded maximum propagation speed. We discuss the possibilities for measuring the topological marker and its flow in experiment.
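The Chern index central to this abstract can be evaluated for a lattice model with the standard Fukui-Hatsugai-Suzuki plaquette recipe. The sketch below applies it to the lowest band of the Harper-Hofstadter Hamiltonian from the earlier sketch; this is a generic textbook method rather than the local-marker construction of the paper, and the overall sign depends on orientation conventions:

import numpy as np

def harper_h(kx, ky, p=1, q=3):
    # same Bloch Hamiltonian as in the earlier sketch
    m = np.arange(q)
    H = np.diag(2 * np.cos(ky + 2 * np.pi * p * m / q)).astype(complex)
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    H[0, -1] += np.exp(1j * q * kx)
    H[-1, 0] += np.exp(-1j * q * kx)
    return H

def chern_lowest(nk=20, q=3):
    u = np.empty((nk, nk, q), dtype=complex)
    for i, kx in enumerate(np.linspace(0, 2 * np.pi / q, nk, endpoint=False)):
        for j, ky in enumerate(np.linspace(0, 2 * np.pi, nk, endpoint=False)):
            u[i, j] = np.linalg.eigh(harper_h(kx, ky, q=q))[1][:, 0]
    c = 0.0
    for i in range(nk):
        for j in range(nk):
            ia, ja = (i + 1) % nk, (j + 1) % nk
            loop = (np.vdot(u[i, j], u[ia, j]) * np.vdot(u[ia, j], u[ia, ja])
                    * np.vdot(u[ia, ja], u[i, ja]) * np.vdot(u[i, ja], u[i, j]))
            c += np.angle(loop)
    return c / (2 * np.pi)

print(round(chern_lowest()))  # +/-1 for the lowest band at flux 1/3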
Möller, G. and Cooper, N. (2018). Synthetic Gauge Fields for Lattices with Multi-Orbital Unit Cells: Routes towards a $\pi$-flux Dice Lattice with Flat Bands. New Journal of Physics [Online] 20. Available at: https://doi.org/10.1088/1367-2630/aad134.
We propose a general strategy for generating synthetic magnetic fields in complex lattices with non-trivial connectivity based on light-matter coupling in cold atomic gases. Our approach starts from an underlying optical flux lattice in which a synthetic magnetic field is generated by coupling several internal states. Starting from a high symmetry optical flux lattice, we superpose a scalar potential with a super- or sublattice period in order to eliminate links between the original lattice sites. As an alternative to changing connectivity, the approach can also be used to create or remove lattice sites from the underlying parent lattice. To demonstrate our concept, we consider the dice lattice geometry as an explicit example, and construct a dice lattice with a flux density of half a flux quantum per plaquette, providing a pathway to flat bands with a large band gap. While the intuition for our proposal stems from the analysis of deep optical lattices, we demonstrate that the approach is robust even for shallow optical flux lattices far from the tight-binding limit.
We also provide an alternative experimental proposal to realise a synthetic gauge field in a fully frustrated dice lattice based on laser-induced hoppings along individual bonds of the lattice, again involving a superlattice potential. In this approach, atoms with a long-lived excited state are trapped using an 'anti-magic' wavelength of light, allowing the desired complex hopping elements to be induced in a specific laser coupling scheme for the dice lattice geometry.
We conclude by comparing the complexity of these alternative approaches, and advocate that complex optical flux lattices provide the more elegant and easily generalisable strategy.
Andrews, B. and Möller, G. (2018). Stability of Fractional Chern Insulators in the Effective Continuum Limit of |C|>1 Harper-Hofstadter Bands. Physical Review B: Condensed Matter and Materials Physics [Online] 97. Available at: http://dx.doi.org/10.1103/PhysRevB.97.035159.
We study the stability of composite fermion fractional quantum Hall states in Harper-Hofstadter bands with Chern number |C|>1. We analyze the states of the composite fermion series for bosons with contact interactions and (spinless) fermions with nearest-neighbor interactions. We examine the scaling of the many-body gap as the bands are tuned to the effective continuum limit $n_\phi \to 1/|C|$. Near these points, the Hofstadter model realises large magnetic unit cells that yield bands with perfectly flat dispersion and Berry curvature. We exploit the known scaling of energies in the effective continuum limit in order to maintain a fixed square aspect ratio in finite-size calculations. Based on exact diagonalization calculations of the band-projected Hamiltonian, we show that almost all finite-size spectra yield the ground state degeneracy predicted by composite fermion theory. We confirm that states at low ranks in the composite fermion hierarchy are the most robust, and yield a clear gap in the thermodynamic limit. For bosons in |C|=2 and |C|=3 bands, our data for the composite fermion states are compatible with a finite gap in the thermodynamic limit. We also report new evidence for gapped incompressible states of fermions in |C|>1 bands, which have large entanglement gaps. For cases with a clear spectral gap, we confirm that the thermodynamic limit commutes with the effective continuum limit. We analyze the nature of the correlation functions for the Abelian composite fermion states and find that they feature $|C|^2$ smooth sheets. We examine two cases associated with a bosonic integer quantum Hall effect (BIQHE): for $\nu=2$ in |C|=1 bands, we find a strong competing state with a higher ground state degeneracy, so no clear BIQHE is found in the band-projected Hofstadter model; for $\nu=1$ in |C|=2 bands, we present additional data confirming the existence of a BIQHE state.
Liu, Z., Möller, G. and Bergholtz, E. (2017). Exotic Non-Abelian Topological Defects in Lattice Fractional Quantum Hall States. Physical Review Letters [Online] 119. Available at: http://dx.doi.org/10.1103/PhysRevLett.119.106801.
We investigate extrinsic wormhole-like twist defects that effectively increase the genus of space in lattice versions of multi-component fractional quantum Hall systems. Although the original band structure is distorted by these defects, leading to localized midgap states, we find that a new lowest flat band representing a higher genus system can be engineered by tuning local single-particle potentials. Remarkably, once local many-body interactions in this new band are switched on, we identify various Abelian and non-Abelian fractional quantum Hall states, whose ground-state degeneracy increases with the number of defects, i.e., with the genus of space. This sensitivity of topological degeneracy to defects provides a "proof of concept" demonstration that genons, predicted by topological field theory as exotic non-Abelian defects tied to a varying topology of space, do exist in realistic microscopic models. Specifically, our results indicate that genons could be created in the laboratory by combining the physics of artificial gauge fields in cold atom systems with already existing holographic beam shaping methods for creating twist defects.
Sendetskyi, O., Anghinolfi, L., Scagnoli, V., Möller, G., Leo, N., Alberca, A., Kohlbrecher, J., Lüning, J., Staub, U. and Heyderman, L. (2016). Magnetic diffuse scattering in artificial kagome spin ice. Physical Review B [Online] 93:224413. Available at: http://dx.doi.org/10.1103/PhysRevB.93.224413.
The study of magnetic correlations in dipolar-coupled nanomagnet systems with synchrotron X-ray scattering provides a means to uncover emergent phenomena and exotic phases, in particular in systems with thermally active magnetic moments. From the diffuse signal of soft X-ray resonant magnetic scattering, we have measured magnetic correlations in a highly dynamic artificial kagome spin ice with sub-70 nm Permalloy nanomagnets. On comparing experimental scattering patterns with Monte Carlo simulations based on a needle-dipole model, we conclude that kagome ice I phase correlations exist in our experimental system even in the presence of moment fluctuations, which is analogous to bulk spin ice and spin liquid behavior. In addition, we describe the emergence of quasi-pinch points in the magnetic diffuse scattering in the kagome ice I phase. These quasi-pinch points bear similarities to the fully developed pinch points with singularities of a magnetic Coulomb phase, and continually evolve into the latter on lowering the temperature. The possibility to measure magnetic diffuse scattering with soft X-rays opens the way to study magnetic correlations in a variety of nanomagnetic systems.
Jackson, T., Möller, G. and Roy, R. (2015). Geometric stability of topological lattice phases. Nature Communications [Online] 6:8629. Available at: http://dx.doi.org/10.1038/ncomms9629.
The fractional quantum Hall (FQH) effect illustrates the range of novel phenomena which can arise in a topologically ordered state in the presence of strong interactions. The possibility of realizing FQH-like phases in models with strong lattice effects has attracted intense interest as a more experimentally accessible venue for FQH phenomena which calls for more theoretical attention. Here we investigate the physical relevance of previously derived geometric conditions which quantify deviations from the Landau level physics of the FQHE. We conduct extensive numerical many-body simulations on several lattice models, obtaining new theoretical results in the process, and find remarkable correlation between these conditions and the many-body gap. These results indicate which physical factors are most relevant for the stability of FQH-like phases, a paradigm we refer to as the geometric stability hypothesis, and provide easily implementable guidelines for obtaining robust FQH-like phases in numerical or real-world experiments.
Möller, G. and Cooper, N. (2015). Fractional Chern Insulators in Harper-Hofstadter Bands with Higher Chern Number. Physical Review Letters [Online] 115:126401. Available at: http://dx.doi.org/10.1103/PhysRevLett.115.126401.
The Harper-Hofstadter model provides a fractal spectrum containing topological bands of any integer Chern number, C.
We study the many-body physics that is realized by interacting particles occupying Harper-Hofstadter bands with |C|>1. We formulate the predictions of Chern-Simons or composite fermion theory in terms of the filling factor, $\nu$, defined as the ratio of particle density to the number of single-particle states per unit area. We show that this theory predicts a series of fractional quantum Hall states with filling factors nu = r/(r|C| +1) for bosons, or nu = r/(2r|C| +1) for fermions. This series includes a bosonic integer quantum Hall state (bIQHE) in |C|=2 bands. We construct specific cases where a single band of the Harper-Hofstadter model is occupied. For these cases, we provide numerical evidence that several states in this series are realized as incompressible quantum liquids for bosons with contact interactions.
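For concreteness, the first members of the quoted series follow from simple arithmetic on the stated formulas (nothing here beyond evaluating them):

from fractions import Fraction

for C in (1, 2, 3):
    bosons = [str(Fraction(r, r * C + 1)) for r in (1, 2, 3)]
    fermions = [str(Fraction(r, 2 * r * C + 1)) for r in (1, 2, 3)]
    print(f"|C|={C}: bosons {bosons}, fermions {fermions}")

For |C|=1 this reproduces the familiar bosonic 1/2, 2/3, 3/4 and fermionic 1/3, 2/5, 3/7 (Jain) sequences.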
Möller, G., Hormozi, L., Slingerland, J. and Simon, S. (2014). Josephson-coupled Moore-Read states. Physical Review B: Condensed Matter and Materials Physics [Online] 90:235101. Available at: http://dx.doi.org/10.1103/PhysRevB.90.235101.
We study a quantum Hall bilayer system of bosons at total filling factor nu = 1, and study the phase that results from short ranged pair-tunneling combined with short ranged interlayer interactions.
We introduce two exactly solvable model Hamiltonians which both yield the coupled Moore-Read state [Phys. Rev. Lett. 108, 256809 (2012)] as a ground state, when projected onto fixed particle numbers in each layer. One of these Hamiltonians describes a gapped topological phase while the other is gapless. However, on introduction of a pair tunneling term, the second system becomes gapped and develops the same topological order as the gapped Hamiltonian. Supported by the exact solution of the full zero-energy quasihole spectrum and a conformal field theory approach, we develop an intuitive picture of this system as two coupled composite fermion superconductors. In this language, pair tunneling provides a Josephson coupling of the superconducting phases of the two layers, and gaps out the Goldstone mode associated with particle transport between the layers. In particular, this implies that quasiparticles are confined between the layers. In the bulk, the resulting phase has the topological order of the Halperin 220 phase with U(1)_2 x U(1)_2 topological order, but it is realized in the symmetric/antisymmetric-basis of the layer index. Consequently, the edge spectrum at a fixed particle number reveals an unexpected U(1)_4 x U(1) structure.
Bühler, A., Lang, N., Kraus, C., Möller, G., Huber, S. and Büchler, H. (2014). Majorana modes and p-wave superfluids for fermionic atoms in optical lattices. Nature Communications [Online] 5:4504. Available at: http://dx.doi.org/10.1038/ncomms5504.
The quest for realizations of non-Abelian phases of matter, driven by their possible use in fault-tolerant topological quantum computing, has been spearheaded by recent developments in p-wave superconductors. The chiral p_x + i p_y-wave superconductor in two dimensions exhibiting Majorana modes provides the simplest phase supporting non-Abelian quasiparticles and can be seen as the blueprint of fractional topological order. Alternatively, Kitaev's Majorana wire has emerged as an ideal toy model to understand Majorana modes. Here, we present a way to make the transition from Kitaev's Majorana wires to two-dimensional p-wave superconductors in a system with cold atomic gases in an optical lattice. The main idea is based on an approach to generate p-wave interactions by coupling orbital degrees of freedom with strong s-wave interactions. We demonstrate how this design can induce Majorana modes at edge dislocations in the optical lattice and we provide an experimentally feasible protocol for the observation of the non-Abelian statistics.
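Kitaev's Majorana wire invoked here is simple enough to diagonalise directly at the single-particle Bogoliubov-de Gennes level. A hedged sketch with illustrative parameters (not code or data from the paper):

import numpy as np

n, t, delta, mu = 40, 1.0, 1.0, 0.5     # topological phase: |mu| < 2t
h = -mu * np.eye(n) - t * (np.eye(n, k=1) + np.eye(n, k=-1))   # normal part
d = delta * (np.eye(n, k=1) - np.eye(n, k=-1))                 # pairing part
H = np.block([[h, d], [-d, -h]])        # BdG matrix in the (c, c^dagger) basis
E = np.sort(np.abs(np.linalg.eigvalsh(H)))
print(E[:4])   # two numerically-zero energies (the Majorana pair), then the gap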
Scaffidi, T. and Möller, G. (2012). Adiabatic Continuation of Fractional Chern Insulators to Fractional Quantum Hall States. Physical Review Letters [Online] 109:246805. Available at: http://dx.doi.org/10.1103/PhysRevLett.109.246805.
We show how the phases of interacting particles in topological flat bands, known as fractional Chern insulators, can be adiabatically connected to incompressible fractional quantum Hall liquids in the lowest Landau-level of an externally applied magnetic field. Unlike previous evidence suggesting the similarity of these systems, our approach enables a formal proof of the equality of their topological orders, and furthermore this proof robustly extends to the thermodynamic limit. We achieve this result using the hybrid Wannier orbital basis proposed by Qi [Phys. Rev. Lett. 107, 126803 (2011)] in order to construct interpolation Hamiltonians that provide continuous deformations between the two models. We illustrate the validity of our approach for the groundstate of bosons in the half filled Chern band of the Haldane model, showing that it is adiabatically connected to the nu=1/2 Laughlin state of bosons in the continuum fractional quantum Hall problem.
Sterdyniak, A., Regnault, N. and Möller, G. (2012). Particle entanglement spectra for quantum Hall states on lattices. Physical Review B [Online] 86:165314. Available at: http://dx.doi.org/10.1103/PhysRevB.86.165314.
We use particle entanglement spectra to characterize bosonic quantum Hall states on lattices, motivated by recent studies of bosonic atoms on optical lattices. Unlike for the related problem of fractional Chern insulators, very good trial wavefunctions are known for fractional quantum Hall states on lattices. We focus on the entanglement spectra for the Laughlin state at nu=1/2 and for the non-Abelian Moore-Read state at nu=1. We undertake a comparative study of these trial states to the corresponding groundstates of repulsive two-body or three-body contact interactions on the lattice. The magnitude of the entanglement gap is studied as a function of the interaction strength on the lattice, giving insights into the nature of Landau-level mixing. In addition, we compare the performance of the entanglement gap and overlaps with trial wavefunctions as possible indicators for the topological order in the system. We discuss how the entanglement spectra allow one to detect competing phases such as a Bose-Einstein condensate.
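The mechanics of extracting an entanglement spectrum can be sketched generically. A hedge is needed: the particle entanglement spectra of this paper partition the particles of a trial quantum Hall state, whereas the toy below applies the underlying Schmidt-decomposition step to a random placeholder state and a simple bipartition:

import numpy as np

dim_A, dim_B = 16, 16
psi = np.random.randn(dim_A * dim_B) + 1j * np.random.randn(dim_A * dim_B)
psi /= np.linalg.norm(psi)
schmidt = np.linalg.svd(psi.reshape(dim_A, dim_B), compute_uv=False)
levels = -2 * np.log(schmidt[schmidt > 1e-12])   # xi = -ln(schmidt**2)
print(np.sort(levels)[:6])                       # low-lying entanglement levels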
Hormozi, L., Möller, G. and Simon, S. (2012). Fractional Quantum Hall Effect of Lattice Bosons Near Commensurate Flux. Physical Review Letters [Online] 108. Available at: http://dx.doi.org/10.1103/PhysRevLett.108.256809.
We study interacting bosons on a lattice in a magnetic field. When the number of flux quanta per plaquette is close to a rational fraction, the low energy physics is mapped to a multi-species continuum model: bosons in the lowest Landau level where each boson is given an internal degree of freedom, or \emph{pseudospin}. We find that the interaction potential between the bosons involves terms that do not conserve pseudospin, corresponding to umklapp processes, which in some cases can also be seen as BCS-type pairing terms. We argue that in experimentally realistic regimes for bosonic atoms in optical lattices with synthetic magnetic fields, these terms are crucial for determining the nature of allowed ground states. In particular, we show numerically that certain paired wave functions related to the Moore-Read Pfaffian state are stabilized by these terms, whereas certain other wave functions can be destabilized when umklapp processes become strong.
Möller, G. and Cooper, N. (2012). Correlated Phases of Bosons in the Flat Lowest Band of the Dice Lattice. Physical Review Letters [Online] 108. Available at: http://dx.doi.org/10.1103/PhysRevLett.108.045306.
We study correlated phases occurring in the flat lowest band of the dice lattice model at flux density one half. We discuss how to realize the
dice lattice model, also referred to as the $\mathcal{T}_3$ lattice, in cold atomic gases. We construct the projection of the model to the lowest
dice band, which yields a Hubbard-Hamiltonian with interaction-assisted hopping processes. We solve this model for bosons in two limits. In the
limit of large density, we use Gross-Pitaevskii mean-field theory to reveal time-reversal symmetry breaking vortex lattice phases. At low density,
we use exact diagonalization to identify three stable phases at fractional filling factors $\nu$ of the lowest band, including a
classical crystal at $\nu=1/3$, a supersolid state at $\nu=1/2$ and a Mott insulator at $\nu=1$.
Bonderson, P., Feiguin, A., Möller, G. and Slingerland, J. (2012). Competing Topological Orders in the ν=12/5 Quantum Hall State. Physical Review Letters [Online] 108:36806. Available at: http://dx.doi.org/10.1103/PhysRevLett.108.036806.
We provide numerical evidence that a p_x-i p_y paired Bonderson-Slingerland (BS) non-Abelian hierarchy state is a strong candidate for the observed ν=12/5 quantum Hall plateau. We confirm the existence of a gapped incompressible ν=12/5 quantum Hall state with shift S = 2 on the sphere, matching that of the BS state. The exact ground state of the Coulomb interaction at S = 2 is shown to have a large overlap with the BS trial wave function. Larger overlaps are obtained with BS-type wave functions that are hierarchical descendants of general p_x - i p_y weakly paired states at ν=12/5. We perform a finite-size scaling analysis of the ground-state energies for ν=12/5 states at shifts corresponding to the BS (S = 2) and 3-clustered Read-Rezayi (S = -2) universality classes. This analysis reveals very tight competition between these two non-Abelian topological orders.
Wójs, A., Möller, G. and Cooper, N. (2011). Search for non-Abelian statistics in half-filled Landau levels of graphene. Journal of Physics: Conference Series [Online] 334:12048. Available at: http://dx.doi.org/10.1088/1742-6596/334/1/012048.
We have employed large scale exact numerical diagonalization in Haldane spherical geometry in a comparative analysis of the correlated many-electron states in the half-filled low Landau levels of graphene and such conventional semiconductors as GaAs, including both spin and valley (i.e., pseudospin) degrees of freedom. We present evidence that the polarized Fermi sea of essentially non-interacting composite fermions remains stable against a pairing transition in both lowest Landau levels of graphene. However, it undergoes spontaneous depolarization, which in (ideal) graphene is unprotected for the lack of a single-particle pseudospin splitting. These results point to the absence of the non-Abelian Pfaffian phase in graphene.
Wójs, A., Sreejith, G., Möller, G., Töke, C. and Jain, J. (2011). Composite Fermion Description of the Excitations of the Paired Pfaffian Fractional Quantum Hall State. Acta Physica Polonica A [Online] 120:839-842. Available at: http://dx.doi.org/10.12693/APhysPolA.120.830.
We review the recently developed bi-partite composite fermion model, in the context of the so-called Pfaffian incompressible quantum liquid with fractional and non-Abelian quasiparticle statistics, a promising model for describing the correlated many-electron ground state responsible for the fractional quantum Hall effect at the Landau level filling factor ν = 5/2. We use the concept of composite fermion partitions to demonstrate the emergence of an essential ingredient of the non-Abelian braid statistics – the topological degeneracy of spatially indistinguishable configurations of multiple widely separated (non-interacting) quasiparticles.
Möller, G. and Simon, S. (2011). Trial Wavefunctions for the Goldstone Mode in ν=1/2+1/2 Quantum Hall Bilayers. Advances in Condensed Matter Physics [Online] 2011:815169. Available at: http://dx.doi.org/10.1155/2011/815169.
Based on the known physics of the excitonic superfluid or 111 state of the quantum Hall $\nu=1/2+1/2$ bilayer, we create a simple trial wavefunction ansatz for constructing a low energy branch of (Goldstone) excitations by taking the overall ground state and boosting one layer with respect to the other. This ansatz works extremely well for any interlayer spacing. For small $d$ this is simply the physics of the Goldstone mode, whereas for large $d$ this is a reflection of composite fermion physics. We find hints that certain aspects of composite fermion physics persist to low $d$ whereas certain aspects of Goldstone mode physics persist to high $d$. Using these results we show nonmonotonic behavior of the Goldstone mode velocity as a function of $d$.
Möller, G., Wójs, A. and Cooper, N. (2011). Neutral Fermion Excitations in the Moore-Read State at Filling Factor ν=5/2. Physical Review Letters [Online] 107:36803. Available at: http://dx.doi.org/10.1103/PhysRevLett.107.036803.
We present evidence supporting the weakly paired Moore-Read phase in the half-filled second Landau level, focusing on some of the qualitative features of its excitations. Based on numerical studies, we show that systems with odd particle number at the flux N_\phi=2N-3 can be interpreted as a neutral fermion mode of one unpaired fermion, which is gapped. The mode is found to have two distinct minima, providing a signature that could be observed by photoluminescence. In the presence of two quasiparticles the same neutral fermion excitation is shown to be gapless, confirming expectations for non-Abelian statistics of the Ising model with degenerate fusion channels 1 and \psi.
Wójs, A., Möller, G. and Cooper, N. (2011). Composite fermion dynamics in half-filled Landau levels of graphene. Acta Physica Polonica A [Online] 119:592. Available at: http://dx.doi.org/10.12693/APhysPolA.119.592.
We report on exact-diagonalization studies of correlated many-electron states in the half-filled Landau levels of graphene, including pseudospin (valley) degeneracy. We demonstrate that the polarized Fermi sea of non-interacting composite fermions remains stable against a pairing transition in the lowest two Landau levels. However, it undergoes spontaneous depolarization, which is unprotected owing to the lack of single-particle pseudospin splitting. These results suggest the absence of the Pfaffian phase in graphene.
Möller, G., Cooper, N. and Gurarie, V. (2011). Structure and consequences of vortex-core states in p-wave superfluids. Physical Review B: Condensed Matter and Materials Physics [Online] 83:14513. Available at: http://dx.doi.org/10.1103/PhysRevB.83.014513.
It is now well established that in two-dimensional chiral $p$-wave paired superfluids, the vortices carry zero-energy
modes which obey non-abelian exchange statistics and can potentially be used for topological quantum computation.
In such superfluids there may also exist other excitations below the bulk gap inside the cores of vortices.
We study the properties of these subgap states, and argue that their
presence affects the topological protection of the zero modes.
In conventional superconductors where the chemical potential is of the order of the Fermi energy
of a non-interacting Fermi gas, there is a large number of subgap states and the mini-gap
towards the lowest of these states is a small fraction of the Fermi energy. It is therefore difficult
to cool the system to below the mini-gap and at experimentally available temperatures, transitions
between the subgap states, including the zero modes, will occur and can alter the quantum
states of the zero-modes. Consequently, qubits defined uniquely in terms of the zero-modes
do not remain coherent.
We show that compound qubits involving the zero-modes and the parity of the occupation number
of the subgap states on each vortex are still well defined. However, practical schemes taking into
account all subgap states would nonetheless be difficult to achieve.
We propose to avoid this difficulty by working in the regime of small chemical potential $\mu$, near the transition
to a strongly paired phase, where the number of subgap states is reduced. We develop the theory to describe
this regime of strong pairing interactions and we show how the subgap states are ultimately absorbed into the bulk gap.
Since the bulk gap also vanishes as $\mu\to 0$ there is an optimum value $\mu_c$ which maximises the combined gap.
We propose cold atomic gases as candidate systems where the regime of strong interactions can be explored,
and explicitly evaluate $\mu_c$ in a Feshbach resonant $^{40}$K gas.
Möller, G. and Cooper, N. (2010). Condensed ground states of frustrated Bose-Hubbard models. Physical Review A [Online] 82:63625. Available at: http://dx.doi.org/10.1103/PhysRevA.82.063625.
We study theoretically the ground states of two-dimensional Bose-Hubbard models which are frustrated by gauge fields. Motivated by recent proposals for the implementation of optically induced gauge potentials, we focus on the situation in which the imposed gauge fields give rise to a pattern of staggered fluxes of magnitude α and alternating in sign along one of the principal axes. For α=1/2 this model is equivalent to the case of uniform flux per plaquette n_φ=1/2, which, in the hard-core limit, realizes the "fully frustrated" spin-1/2 XY model. We show that the mean-field ground states of this frustrated Bose-Hubbard model typically break translational symmetry. Given the presence of both a non-zero superfluid fraction and translational symmetry breaking, these phases are supersolid. We introduce a general numerical technique to detect broken symmetry condensates in exact diagonalization studies. Using this technique we show that, for all cases studied, the ground state of the Bose-Hubbard model with staggered flux α is condensed, and we obtain quantitative determinations of the condensate fraction. We discuss the experimental consequences of our results. In particular, we explain the meaning of gauge invariance in ultracold-atom systems subject to optically induced gauge potentials and show how the ability to imprint phase patterns prior to expansion can allow very useful additional information to be extracted from expansion images.
Wójs, A., Möller, G., Simon, S. and Cooper, N. (2010). Skyrmions in the Moore-Read State at ν=5/2. Physical Review Letters [Online] 104:86801. Available at: https://doi.org/10.1103/PhysRevLett.104.086801.
We study spinful excitations in the Moore-Read state. Energetics of the skyrmion based on a spin-wave picture support the existence of skyrmion excitations in the plateau below $\nu=5/2$. This prediction is then tested numerically. We construct trial skyrmion wavefunctions for general FQHE states, and obtain significant overlaps for the predicted skyrmions of $\nu=5/2$. The case of $\nu=5/2$ is particularly interesting as skyrmions have twice the charge of quasiparticles (qp's). As the spin polarization of the system is tuned from full to none, we observe a transition between qp- and skyrmion-like behaviour of the excitation spectrum that can be interpreted as binding of qp's. Our ED results confirm that skyrmion states are energetically competitive with quasiparticles at low Zeeman coupling. Disorder and large density of quasiparticles are discussed as further mechanisms for depolarization.
Möller, G. and Moessner, R. (2009). Magnetic multipole analysis of kagome and artificial ice dipolar arrays. Physical Review B: Condensed Matter and Materials Physics [Online] 80:140409. Available at: http://dx.doi.org/10.1103/PhysRevB.80.140409.
We analyse an array of linearly extended monodomain dipoles forming square and kagome lattices. We find that its phase diagram contains two (distinct) finite-entropy kagome ice regimes - one disordered, one algebraic - as well as a low-temperature ordered phase. In the limit of the islands almost touching, we find a staircase of corresponding entropy plateaux, which is analytically captured by a theory based on magnetic charges. For the case of a modified square ice array, we show that the charges (`monopoles') are excitations experiencing two distinct Coulomb interactions: a magnetic `three-dimensional' one as well as a logarithmic `two dimensional' one of entropic origin.
Möller, G. and Cooper, N. (2009). Composite Fermion Theory for Bosonic Quantum Hall States on Lattices. Physical Review Letters [Online] 103:105303. Available at: http://dx.doi.org/10.1103/PhysRevLett.103.105303.
We study the groundstates of the Bose-Hubbard model in a uniform effective magnetic field, illustrating the physics of cold atomic gases on `rotating optical lattices'. Mapping the bosons to composite fermions leads to the prediction of quantum Hall fluids that have no counterpart in the continuum. We construct trial wavefunctions for these phases, and perform numerical tests of the predictions of the composite fermion model. Our results establish the existence of strongly correlated phases beyond those in the continuum limit, and provide evidence for a wider scope of the composite fermion approach beyond its application to the lowest Landau-level.
Papić, Z., Möller, G., Milovanovic, M., Regnault, N. and Goerbig, M. (2009). Fractional quantum Hall state at ν=1/4 in a wide quantum well. Physical Review B: Condensed Matter and Materials Physics [Online] 79:245325. Available at: http://dx.doi.org/10.1103/PhysRevB.79.245325.
We investigate, with the help of Monte-Carlo and exact-diagonalization calculations in the spherical geometry, several compressible and incompressible candidate wave functions for the recently observed quantum Hall state at the filling factor $\nu=1/4$ in a wide quantum well. The quantum well is modeled as a two-component system by retaining its two lowest subbands. We make a direct connection with the phenomenological effective-bilayer model, which is commonly used in the description of a wide quantum well, and we compare our findings with the established results at $\nu=1/2$ in the lowest Landau level. At $\nu=1/4$, the overlap calculations for the Halperin (5,5,3) and (7,7,1) states, the generalized Haldane-Rezayi state and the Moore-Read Pfaffian, suggest that the incompressible state is likely to be realized in the interplay between the Halperin (5,5,3) state and the Moore-Read Pfaffian. Our numerics shows the latter to be very susceptible to changes in the interaction coefficients, thus indicating that the observed state is of multicomponent nature.
Möller, G., Jolicoeur, T. and Regnault, N. (2009). Pairing in ultracold Fermi gases in the lowest Landau level. Physical Review A [Online] 79. Available at: http://dx.doi.org/10.1103/PhysRevA.79.033609.
We study a rapidly rotating gas of unpolarized spin-1/2 ultracold fermions in the two-dimensional regime when all atoms reside in the lowest Landau level. Due to the presence of the spin degree of freedom both s-wave and p-wave interactions are allowed at ultralow temperatures. We investigate the phase diagram of this system as a function of the filling factor in the lowest Landau level and in terms of the ratio between s- and p-wave interaction strengths. We show that the presence of attractive interactions induces a wide regime of phase separation with formation of maximally compact droplets that are either fully polarized or composed of spin-singlets. In the regime with no phase separation, we give evidence for fractional quantum Hall states. Most notably, we find two distinct singlet states at the filling nu=2/3 for different interactions. One of these states is accounted for by the composite fermion theory, while the other one is a paired state for which we identify two competing descriptions with different topological structures. This paired state may be an Abelian liquid of composite spin-singlet Bose molecules with Laughlin correlations. Alternatively, it may be a known non-Abelian paired state, indicated by good overlaps with the corresponding trial wave function. By fine tuning of the scattering lengths it is possible to create the non-Abelian critical Haldane-Rezayi state for nu = 1/2 and the permanent state of Moore and Read for nu=1. For purely repulsive interactions, we also find evidence for a gapped Halperin state at nu=2/5.
Möller, G., Simon, S. and Rezayi, E. (2009). Trial wave functions for ν=1/2+1/2 quantum Hall bilayers. Physical Review B: Condensed Matter and Materials Physics [Online] 79:125106. Available at: http://dx.doi.org/10.1103/PhysRevB.79.125106.
Quantum Hall bilayer systems at filling fractions near $\nu=1/2+1/2$ undergo a transition from a compressible phase with strong intralayer correlation to an incompressible phase with strong interlayer correlations as the layer separation $d$ is reduced below some critical value. Deep in the intralayer phase (large separation) the system can be interpreted as a fluid of composite fermions (CFs), whereas deep in the interlayer phase (small separation) the system can be interpreted as a fluid of composite bosons (CBs). The focus of this paper is to understand the states that occur for intermediate layer separation by using variational wavefunctions. We consider two main classes of wavefunctions. In the first class, first discussed by PRL {\bf 77}, 3009 (1996), we consider interlayer BCS pairing of two independent CF liquids. We find that these wavefunctions are exceedingly good for $d \gtrsim \ell_0$ with $\ell_0$ the magnetic length. The second class of wavefunctions naturally follows the reasoning of PRL {\bf 91}, 046803 (2003) and generalizes the idea of pairing wavefunctions by allowing the CFs also to be replaced continuously by CBs. This generalization allows us to construct exceedingly good wavefunctions for interlayer spacings of $d \lesssim \ell_0$, as well. The accuracy of the wavefunctions discussed in this work, compared with exact diagonalization, is comparable to that of the celebrated Laughlin wavefunction. We conclude that over a range of $d$ there exists a phase of interlayer BCS-paired composite fermions. At smaller $d$, we find a second order transition to a composite boson liquid, known also as the 111 phase.
Möller, G., Simon, S. and Rezayi, E. (2008). Paired Composite Fermion Phase of Quantum Hall Bilayers at ν=1/2+1/2. Physical Review Letters [Online] 101:176803. Available at: http://dx.doi.org/10.1103/PhysRevLett.101.176803.
We provide numerical evidence for composite fermion pairing in quantum Hall bilayer systems at filling nu=1/2+1/2 for intermediate spacing between the layers. We identify the phase as p_x+i p_y pairing, and construct high accuracy trial wave functions to describe the ground state on the sphere. For large distances between the layers, and for finite systems, a competing ''Hund's rule'' state, or composite fermion liquid, prevails for certain system sizes.
Möller, G. and Simon, S. (2008). Paired composite-fermion wave functions. Physical Review B: Condensed Matter and Materials Physics [Online] 77:75319. Available at: http://dx.doi.org/10.1103/PhysRevB.77.075319.
We construct a family of BCS paired composite fermion wavefunctions that generalize, but remain in the same topological phase as, the Moore-Read Pfaffian state for the half-filled Landau level. It is shown that for a wide range of experimentally relevant inter-electron interactions the groundstate can be very accurately represented in this form.
Möller, G. and Cooper, N. (2007). Density Waves and Supersolidity in Rapidly Rotating Atomic Fermi Gases. Physical Review Letters [Online] 99:190409. Available at: http://dx.doi.org/10.1103/PhysRevLett.99.190409.
We study theoretically the low-temperature phases of a two-component atomic Fermi gas with attractive s-wave interactions under conditions of rapid rotation. We find that, in the extreme quantum limit, when all particles occupy the lowest Landau level, the normal state is unstable to the formation of "charge" density wave (CDW) order. At lower rotation rates, when many Landau levels are occupied, we show that the low-temperature phases can be supersolids, involving both CDW and superconducting order.
Möller, G. and Moessner, R. (2006). Artificial Square Ice and Related Dipolar Nanoarrays. Physical Review Letters [Online] 96:237202. Available at: http://dx.doi.org/10.1103/PhysRevLett.96.237202.
We study a frustrated dipolar array recently manufactured lithographically by Wang et al. [Nature (London) 439, 303 (2006)] in order to realize the square ice model in an artificial structure. We discuss models for thermodynamics and dynamics of this system. We show that an ice regime can be stabilized by small changes in the array geometry; a different magnetic state, kagome ice, can similarly be constructed. At low temperatures, the square ice regime is terminated by a thermodynamic ordering transition, which can be chosen to be ferro- or antiferromagnetic. We show that the arrays do not fully equilibrate experimentally, and identify a likely dynamical bottleneck.
Möller, G., Matveenko, S. and Ouvry, S. (2006). Dimensional Reduction on a Sphere. International Journal of Modern Physics B [Online] 20:3533-3546. Available at: http://dx.doi.org/10.1142/S0217979206035503.
The question of the dimensional reduction of two-dimensional (2d) quantum models on a sphere to one-dimensional (1d) models on a circle is addressed. A possible application is to look at a relation between the 2d anyon model and the 1d Calogero-Sutherland model, which would allow for a better understanding of the connection between 2d anyon exchange statistics and Haldane exclusion statistics. The latter is realized microscopically in the 2d LLL anyon model and in the 1d Calogero model. In a harmonic well of strength ω or on a circle of radius R – both parameters ω and R have to be viewed as long distance regulators – the Calogero spectrum is discrete. It is well known that by confining the anyon model in a 2d harmonic well and projecting it on a particular basis of the harmonic well eigenstates, one obtains the Calogero-Moser model. It is then natural to consider the anyon model on a sphere of radius R and look for a possible dimensional reduction to the Calogero-Sutherland model on a circle of radius R. First, the free one-body case is considered, where a mapping from the 2d sphere to the 1d chiral circle is established by projection on a special class of spherical harmonics. Second, the N-body interacting anyon model is considered: it happens that the standard anyon model on the sphere is not adequate for dimensional reduction. One is thus led to define a new spherical anyon-like model deduced from the Aharonov-Bohm problem on the sphere where each flux line pierces the sphere at one point and exits it at its antipode.
Möller, G. and Simon, S. (2006). Interlayer correlations versus intralayer correlations in a Quantum Hall bilayer at total filling one. Journal de Physique [Online] 131:283-284. Available at: http://dx.doi.org/10.1051/jp4:2005131072.
In Quantum Hall bilayers, at total filling factor one, a transition from a compressible phase with weak interlayer correlations to an incompressible phase with strong interlayer correlations is observed as the distance between the two layers is reduced. The transition between these two regimes can be understood using a trial wavefunction approach based on the composite particle picture.
Möller, G. and Simon, S. (2005). Composite fermions in a negative effective magnetic field: A Monte Carlo study. Physical Review B: Condensed Matter and Materials Physics [Online] 72:45344. Available at: http://dx.doi.org/10.1103/PhysRevB.72.045344.
The method of Jain and Kamilla [PRB 55, R4895 (1997)] allows numerical generation of composite fermion trial wavefunctions for large numbers of electrons in high magnetic fields at filling fractions of the form $\nu=p/(2mp+1)$ with $m$ and $p$ positive integers. In the current paper we generalize this method to the case where the composite fermions are in an effective (mean) field with opposite sign from the actual physical field, i.e. when $p$ is negative. We examine both the ground state energies and the low energy neutral excitation spectra of these states. Using particle-hole symmetry we can confirm the correctness of our method by comparing results for the series $m=1$ with $p>0$ (previously calculated by others) to our results for the conjugate series $m=1$ with $p <0$. Finally, we present similar results for ground state energies and low energy neutral excitations for the states with $m=2$ and $p <0$ which were not previously addressable, comparing our results to the $m=1$ case and the $p > 0$, $m=2$ cases.
Wójs, A., Möller, G., Simon, S. and Cooper, N. (2011). Skyrmions in a Half-Filled Second Landau Level. In: 30th International Conference on the Physics of Semiconductors. IOP Institute of Physics, pp. 631-632. Available at: http://dx.doi.org/10.1063/1.3666536.
We studied charged excitations of the ν=5/2 fractional quantum Hall state allowing for spin depolarization. It is generally accepted that the ground state is a spin-polarized incompressible quantum liquid, adiabatically connected to the Pfaffian state, whose spin-polarized quasiholes (QHs) obey non-Abelian statistics. Using numerical diagonalization and taking account of non-zero well widths we demonstrated that at a sufficiently low Zeeman energy it is energetically favorable for pairs of charge e/4 QHs to bind into charge e/2 Skyrmions. We showed that Skyrmion formation is further promoted by disorder, and argue that this can lead to a depolarized ground state in realistic experimental situations.
Möller, G., Wójs, A. and Cooper, N. (2009). Fractional Quantum Hall States with Non-Abelian Statistics. In: XXXVIII International School and Conference on the Physics of Semiconductors "Jaszowiec". Polish Academy of Sciences, pp. 847-848. Available at: http://przyrbwn.icm.edu.pl/APP/ABSTR/116/a116-5-22.html.
Using exact numerical diagonalization we have studied correlated many-electron ground states in a partially filled second Landau level. We consider filling fractions ν = 1/2 and 2/5, for which incompressible quantum liquids with non-Abelian anyon statistics have been proposed. Our calculations include finite layer width, Landau level mixing and arbitrary deformation of the interaction pseudopotential. Computed energies, gaps, and correlation functions support the non-Abelian ground states at both ν = 1/2 ("Pfaffian") and ν = 2/5 ("parafermion" state).
Two general series identities involving modified Bessel functions and a class of arithmetical functions
Zeta and $L$-functions: analytic theory
Multiplicative number theory
Bruce C. Berndt, Atul Dixit, Rajat Gupta, Alexandru Zaharescu
Journal: Canadian Journal of Mathematics, First View
Published online by Cambridge University Press: 10 October 2022, pp. 1-31
We consider two sequences $a(n)$ and $b(n)$, $1\leq n<\infty $, generated by Dirichlet series
$$\sum_{n=1}^{\infty}\frac{a(n)}{\lambda_n^{s}}\qquad\text{and}\qquad \sum_{n=1}^{\infty}\frac{b(n)}{\mu_n^{s}},$$
satisfying a familiar functional equation involving the gamma function $\Gamma (s)$. Two general identities are established. The first involves the modified Bessel function $K_{\mu }(z)$, and can be thought of as a 'modular' or 'theta' relation wherein modified Bessel functions, instead of exponential functions, appear. Appearing in the second identity are $K_{\mu }(z)$, the Bessel functions of imaginary argument $I_{\mu }(z)$, and ordinary hypergeometric functions ${_2F_1}(a,b;c;z)$. Although certain special cases appear in the literature, the general identities are new. The arithmetical functions appearing in the identities include Ramanujan's arithmetical function $\tau (n)$, the number of representations of n as a sum of k squares $r_k(n)$, and primitive Dirichlet characters $\chi (n)$.
A model of tear-film breakup with continuous mucin concentration and viscosity profiles – CORRIGENDUM
Mohar Dey, Atul S. Vivek, Harish N. Dixit, Ashutosh Richhariya, James J. Feng
Journal: Journal of Fluid Mechanics / Volume 889 / 25 April 2020
Published online by Cambridge University Press: 28 February 2020, E1
Generalized Lambert series and arithmetic nature of odd zeta values
Diophantine approximation, transcendental number theory
Atul Dixit, Bibekananda Maji
Journal: Proceedings of the Royal Society of Edinburgh Section A: Mathematics / Volume 150 / Issue 2 / April 2020
Print publication: April 2020
It is pointed out that the generalized Lambert series $\sum\nolimits_{n = 1}^\infty {[(n^{N-2h})/(e^{n^Nx}-1)]} $ studied by Kanemitsu, Tanigawa and Yoshimoto can be found on page 332 of Ramanujan's Lost Notebook in a slightly more general form. We extend an important transformation of this series obtained by Kanemitsu, Tanigawa and Yoshimoto by removing restrictions on the parameters N and h that they impose. From our extension we deduce a beautiful new generalization of Ramanujan's famous formula for odd zeta values which, for N odd and m > 0, gives a relation between ζ(2m + 1) and ζ(2Nm + 1). A result complementary to the aforementioned generalization is obtained for any even N and m ∈ ℤ. It generalizes a transformation of Wigert and can be regarded as a formula for ζ(2m + 1 − 1/N). Applications of these transformations include a generalization of the transformation for the logarithm of Dedekind eta-function η(z), Zudilin- and Rivoal-type results on transcendence of certain values, and a transcendence criterion for Euler's constant γ.
A model of tear-film breakup with continuous mucin concentration and viscosity profiles
Journal: Journal of Fluid Mechanics / Volume 858 / 10 January 2019
Published online by Cambridge University Press: 06 November 2018, pp. 352-376
We propose an alternative to the prevailing framework for modelling tear-film breakup, which posits a layered structure with a mucus layer next to the cornea and an aqueous layer on top. Experimental evidence shows continuous variation of mucin concentration throughout the tear film, with no distinct boundary between the two layers. Thus, we consider a continuous-viscosity model that replaces the mucus and aqueous layers by a single liquid layer with continuous profiles of mucin concentration and viscosity, which are governed by advection–diffusion of mucin. The lipids coating the tear film are treated as insoluble surfactants as previously, and slip is allowed on the ocular surface. Using the thin-film approximation, we carry out linear stability analysis and nonlinear numerical simulations of tear-film breakup driven by van der Waals attraction. Results show that for the same average viscosity, having more viscous material near the ocular surface stabilizes the film and prolongs the breakup time. Compared with the layered models, the continuous-viscosity model predicts film breakup times that are in better agreement with experimental data. Finally, we also suggest a hydrodynamic explanation for how pathological loss of membrane-associated mucins may lead to faster breakup.
GENERALIZED LAMBERT SERIES, RAABE'S COSINE TRANSFORM AND A GENERALIZATION OF RAMANUJAN'S FORMULA FOR $\zeta(2m+1)$
ATUL DIXIT, RAJAT GUPTA, RAHUL KUMAR, BIBEKANANDA MAJI
Journal: Nagoya Mathematical Journal / Volume 239 / September 2020
Print publication: September 2020
A comprehensive study of the generalized Lambert series $\sum_{n=1}^{\infty}\frac{n^{N-2h}\exp(-an^{N}x)}{1-\exp(-n^{N}x)}$, $0<a\leqslant 1$, $x>0$, $N\in \mathbb{N}$ and $h\in \mathbb{Z}$, is undertaken. Several new transformations of this series are derived using a deep result on Raabe's cosine transform that we obtain here. Three of these transformations lead to two-parameter generalizations of Ramanujan's famous formula for $\zeta(2m+1)$ for $m>0$, the transformation formula for the logarithm of the Dedekind eta function and Wigert's formula for $\zeta(1/N)$, $N$ even. Numerous important special cases of our transformations are derived, for example, a result generalizing the modular relation between the Eisenstein series $E_{2}(z)$ and $E_{2}(-1/z)$. An identity relating $\zeta(2N+1),\zeta(4N+1),\ldots,\zeta(2Nm+1)$ is obtained for $N$ odd and $m\in \mathbb{N}$. In particular, this gives a beautiful relation between $\zeta(3),\zeta(5),\zeta(7),\zeta(9)$ and $\zeta(11)$. New results involving infinite series of hyperbolic functions with $n^{2}$ in their arguments, which are analogous to those of Ramanujan and Klusch, are obtained.
THE FINITE FOURIER TRANSFORM OF CLASSICAL POLYNOMIALS
Nontrigonometric harmonic analysis
ATUL DIXIT, LIN JIU, VICTOR H. MOLL, CHRISTOPHE VIGNAT
Journal: Journal of the Australian Mathematical Society / Volume 98 / Issue 2 / April 2015
Published online by Cambridge University Press: 04 December 2014, pp. 145-160
The finite Fourier transform of a family of orthogonal polynomials is the usual transform of these polynomials extended by $0$ outside their natural domain of orthogonality. Explicit expressions are given for the Legendre, Jacobi, Gegenbauer and Chebyshev families.
Analogues of the general theta transformation formula
Atul Dixit
A new class of integrals involving the confluent hypergeometric function ${}_1F_1(a;c;z)$ and the Riemann Ξ-function is considered. It generalizes a class containing some integrals of Ramanujan, Hardy and Ferrar and gives, as by-products, transformation formulae of the form F(z, α) = F(iz, β), where αβ = 1. As particular examples, we derive an extended version of the general theta transformation formula and generalizations of certain formulae of Ferrar and Hardy. A one-variable generalization of a well-known identity of Ramanujan is also given. We conclude with a generalization of a conjecture due to Ramanujan, Hardy and Littlewood involving infinite series of the Möbius function.
Humidity Sensing Property of Zinc Oxide Film Deposited by PLD
Shobhna Dixit, K. C. Dubey, K. P. Mishra, Atul Srivastava, R. K. Shukla, Anchal Srivastava
Published online by Cambridge University Press: 01 February 2011, 1074-I05-27
This paper reports structural, morphological, optical and humidity sensing characteristics of pulsed laser deposited ZnO film. The XRD pattern reveals the amorphous structure of the film. Scanning electron micrographs indicate the formation of ZnO rods of micron size. Transmission increases gradually in the UV-VIS region. For studying the humidity sensing characteristics of the film, the base of a right-angled isosceles glass prism has been coated. Chopped light from a polarized He-Ne laser incident on the entry face of the prism gets reflected from the base – film – humid air interfaces and the emergent light is then collected by the detector placed in front of the exit face of the prism. The least change in relative humidity which could be measured using the present configuration is 1.06RH%. Further, the film is annealed at 400°C for four hours and its humidity sensing behavior is investigated in a similar manner, which now shows a reversed trend. The sensitivity to humidity has decreased and the least change which could be detected now is 1.16RH%.
\begin{document}
\begin{frontmatter}
\title{Mixed-effects models using the normal and the Laplace distributions: A $\mathbf{2 \times 2}$ convolution scheme for applied research} \runtitle{Normal-Laplace convolutions}
\begin{aug} \author{\fnms{Marco} \snm{Geraci}\corref{}\thanksref{t1}\ead[label=e1]{[email protected]}}
\thankstext{t1}{Corresponding author: Marco Geraci, Department of Epidemiology and Biostatistics, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia SC 29209, USA. \printead{e1}}
\runauthor{M. Geraci}
\affiliation{University of South Carolina\thanksmark{t1}}
\end{aug}
\begin{abstract} \quad In statistical applications, the normal and the Laplace distributions are often contrasted: the former as a standard tool of analysis, the latter as its robust counterpart. I discuss the convolutions of these two popular distributions and their applications in research. I consider four models within a simple $2\times 2$ scheme which is of practical interest in the analysis of clustered (e.g., longitudinal) data. In my view, these models, some of which are less known than others by the majority of applied researchers, constitute a `family' of sensible alternatives when modelling issues arise. In three examples, I revisit data published recently in the epidemiological and clinical literature as well as a classic biological dataset. \end{abstract}
\begin{keyword}[class=MSC] \kwd[Primary ]{62F99} \kwd[; secondary ]{62J05} \end{keyword}
\begin{keyword} \kwd{Crohn's disease} \kwd{linear quantile mixed models} \kwd{meta-analysis} \kwd{multilevel designs} \kwd{random effects} \end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec:1}
The normal (or Gaussian) distribution has historically played a prominent role, not only as the limiting distribution of a number of sample statistics, but also as a model for data obtained in empirical studies. Its probability density is given by \begin{equation}\label{eq:1} f_{N}(t) = \frac{1}{\sqrt{2\pi}\sigma} \exp \left\{ -\frac{1}{2}\left(\frac{t-\mu}{\sigma}\right)^2 \right\}, \end{equation} for $- \infty < t < \infty$. The Laplace (or double exponential) distribution, like the normal, has a long history in Statistics. However, despite being of potentially great value in applied research, it has never received the same attention. Its density is given by \begin{equation}\label{eq:2}
f_{L}(t) = \frac{1}{\sqrt{2}\sigma} \exp \left\{ -\sqrt{2}\left|\frac{t-\mu}{\sigma}\right| \right\}. \end{equation} Throughout this paper, these distributions will be denoted by $\mathcal{N}(\mu, \sigma)$ and $\mathcal{L}(\mu, \sigma)$, respectively.
In (\ref{eq:1}) and (\ref{eq:2}), $\mu$ and $\sigma$, where $-\infty < \mu < \infty$ and $\sigma > 0$, represent a location and a scale parameter, respectively. These two densities are shown in the left-hand side plots of Figure~\ref{fig:1}. The normal and Laplace distributions are both symmetric about $\mu$ and have variance equal to $\sigma^2$. As compared to the normal one, the Laplace density has a more pronounced peak (a characteristic technically known as \textit{leptokurtosis}) and fatter tails. Interestingly, the Laplace distribution can be represented as a scale mixture of normal distributions. Let $T \sim \mathcal{L}(\mu, \sigma)$, then \citep{kotz_2001} \begin{equation} \nonumber T \,{\buildrel d \over =}\, \mu + \sigma \sqrt{E}Z, \end{equation} where $E$ and $Z$ are independent standard exponential and normal variables, respectively. That is, the Laplace distribution emerges from heterogeneous normal sub-populations.
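The scale-mixture representation also provides a convenient way of simulating Laplace variates. The following R code is a minimal illustrative sketch (the function name \texttt{rlaplace} is mine, not part of base R), which draws from $\mathcal{L}(\mu,\sigma)$ and checks that the sample variance is close to $\sigma^2$.
\begin{verbatim}
## Simulate from L(mu, sigma) via T = mu + sigma*sqrt(E)*Z,
## with E ~ Exp(1) and Z ~ N(0,1) independent
rlaplace <- function(n, mu = 0, sigma = 1) {
  mu + sigma * sqrt(rexp(n)) * rnorm(n)
}
set.seed(1)
y <- rlaplace(1e5, mu = 0, sigma = 2)
c(mean = mean(y), var = var(y))  # variance should be close to 4
\end{verbatim}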
\begin{figure}
\caption{(a) Left: The normal (solid line) and double exponential (dashed line) densities. The location parameter is set to $0$ and the variance is set to $1$. (b) Right: The normal-normal (solid line), normal-Laplace (dashed line), and Laplace-Laplace (dot-dash line) densities. The location parameter is set to $0$ and the variance is set to $1$.}
\label{fig:1}
\end{figure}
Both laws were proposed by Pierre-Simon Laplace: the double exponential in 1774 and the normal in 1778 (for an historical account, see \citeauthor{wilson_1923}, \citeyear{wilson_1923}). At Laplace's time, the problem to be solved was that of estimating $\mu$ according to the linear model \begin{equation} \nonumber T = \mu + \sigma\,\varepsilon, \end{equation} where $\varepsilon$ denotes the error term. This problem was encountered, for example, in astronomy and geodesy, where $\mu$ represented the `true' value of a physical quantity to be estimated from experimental observations. It is well known that, under the Gaussian error law (\ref{eq:1}), the maximum likelihood estimate of $\mu$ is the sample mean but, under the double exponential error law (\ref{eq:2}), it is the sample median. The former is the minimiser of the least squares (LS) estimator, while the latter is the minimiser of the least absolute deviations (LAD) estimator.
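This contrast is easily visualised by simulation. The short R sketch below (illustrative code only) contaminates a Gaussian sample with occasional gross errors: the sample median, i.e. the LAD estimate of $\mu$, remains close to the target, whereas the sample mean, i.e. the LS estimate, is pulled away.
\begin{verbatim}
## LS vs LAD estimation of mu = 10 under 5% gross errors
set.seed(42)
n <- 200
out <- rbinom(n, 1, 0.05)          # indicator of a gross error
e <- rnorm(n) * ifelse(out == 1, 20, 1)
y <- 10 + e
c(LS = mean(y), LAD = median(y))   # the median stays near 10
\end{verbatim}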
The robustness of the LAD estimator in the presence of large errors was known to Laplace himself. However, given the superior analytical tractability of the LS estimator (and therefore of the normal distribution), the mean regression model \begin{eqnarray*} T = x^{\top}\beta + \sigma\,\varepsilon, & \; \varepsilon \sim \mathcal{N}(0,1), \end{eqnarray*} quickly became the `standard' tool to study the association between the location parameter of $T$ (the response variable) and other variables of interest, $x$ (the covariates).
In the past few years, theoretical developments related to least absolute error regression \citep{bassett_koenker,koenker_bassett} have led to a renewed interest in the Laplace distribution and its asymmetric extension \citep{yu_zhang} as a pseudo-likelihood for quantile regression models, of which median regression is a special case \citep[see, among others,][]{yu_moyeed,yu_etal,geraci_bottai_2007}. In parallel, computational advances based on interior point algorithms have made LAD estimation a serious competitor of LS methods \citep{portnoy_koenker,koenker_ng}. Another reason for the `comeback' of the double exponential is related to its robustness properties, which make this distribution, and others like it, desirable in many applied research areas \citep{kozu_nada_2010}.
In statistical applications, the interest is often in processes where the source of randomness can be attributed to more than one `error' (a hierarchy of errors is also established). For instance, this is the case of longitudinal studies, where part of the variation is attributed to an individual source of heterogeneity (often called a `random effect'), say $\varepsilon_{1}$, independently of the noise, $\varepsilon_{2}$, i.e. \begin{equation} \nonumber T = \mu + \sigma_{1}\,\varepsilon_{1} + \sigma_{2}\,\varepsilon_{2}, \end{equation} where the distributions of $\varepsilon_{1}$ and $\varepsilon_{2}$ are often assumed to be symmetric about zero. It will be shown later that this model can be extended to include covariates associated with the parameter $\mu$ and the random effect $\varepsilon_{1}$. For now, it suffices to notice that the linear combination of random errors leads to the study of convolutions. So let us define a convolution \citep{mood_etal}. \begin{definition} If $U$ and $V$ are two independent, absolutely continuous random variables with density functions $f_{U}$ and $f_{V}$, respectively, and $T = U + V$, then \[ f_{T}(t) = f_{U+V}(t) = \int_{-\infty}^{\infty} f_{V}(t - u)f_{U}(u)\,\rd u = \int_{-\infty}^{\infty} f_{U}(t - v)f_{V}(v)\,\rd v \] is the convolution of $f_{U}$ and $f_{V}$. \end{definition}
In Section \ref{sec:2}, I consider convolutions based on the normal and the Laplace distributions within a simple and practical $2 \times 2$ scheme. In Section \ref{sec:3}, I discuss inference when data are clustered, along with the implementation of estimation procedures using existing R \citep{R} software (further technical details are provided in Appendix, along with a simulation study). In Section \ref{sec:4}, I show some applications and, in Section \ref{sec:5}, conclude with final remarks.
\section{Convolutions}\label{sec:2} Let $Y$ be a real-valued random variable with absolutely continuous distribution function $F(y) = \Pr\left\{Y \leq y \right\}$ and density $f(y)\equiv F'(y)$. The variable $Y$ is observable and represents the focus of the analysis in specific applications (e.g., as the response variable in regression models). I consider four cases in which $Y$ results from one of the four convolutions reported in Table~\ref{tab:1}. The letters $\nu$ and $\lambda$ are used to denote normal and Laplace variates with densities (\ref{eq:1}) and (\ref{eq:2}), respectively. The subscripts 1 and 2 indicate, respectively, which of the two random variables plays the role of a random effect and which one is considered to be the noise. Here, the former may in general be associated with a vector of covariates and may represent an inferential quantity of interest; the latter is treated as a nuisance. Moreover, I assume independence between the components of the convolution throughout the paper.
\begin{table}[t!] \caption{$2 \times 2$ convolution scheme for independent Gaussian ($\nu$) and Laplacian ($\lambda$) random variables. Rows index the distribution of the random effect ($\varepsilon_{1}$); columns index the distribution of the error ($\varepsilon_{2}$).} \centering \begin{tabular}{lcc}
\toprule
 & Normal & Laplace \\
\midrule
Normal & $\nu_{1} + \nu_{2}$ (NN) & $\nu_{1} + \lambda_{2}$ (NL)\\
Laplace & $\lambda_{1} + \nu_{2}$ (LN) & $\lambda_{1} + \lambda_{2}$ (LL)\\
\bottomrule \end{tabular}\label{tab:1} \end{table}
A few remarks about notation are needed. The shorthand $\mathrm{diag}(t)$ or $\mathrm{diag}(t_{1}, \ldots, t_{n})$, where $t = (t_{1},\ldots,t_{n})^{\top}$ is a $n \times 1$ vector, is used to denote the $n \times n$ diagonal matrix whose diagonal elements are the corresponding elements of $t$. The standard normal density and cumulative distribution functions will be denoted by $\phi$ and $\Phi$, respectively.
\subsection{Normal-normal (NN) convolution}\label{sec:2.1} The first convolution \begin{equation}\label{eq:3} Y = \nu_{1} + \nu_{2}, \end{equation} where $\nu_{1} \sim \mathcal{N}(0, \sigma_{1})$ and $\nu_{2} \sim \mathcal{N}(0, \sigma_{2})$, represents, in some respects, the simplest case among the four combinations defined in Table~\ref{tab:1}. Standard theory of normal distributions leads to \begin{equation}\label{eq:4} f_{NN}(y) = \frac{1}{\psi}\; \phi\left(\frac{y}{\psi}\right), \end{equation} where $\psi^2 \equiv \operatorname{var}(Y) = \sigma_{1}^2 + \sigma_{2}^2$.
Model (\ref{eq:3}) can be generalised to the regression model \begin{equation}\label{eq:5} Y = x^{\top}\beta + z^{\top}\nu_{1} + \nu_{2}, \end{equation} where $x$ and $z$ are, respectively, $p \times 1$ and $q \times 1$ vectors of covariates, and $\beta$ is a $p\times 1$ dimensional vector of regression coefficients. If $q > 1$, then I assume $\nu_1 \sim \mathcal{N}_{q}(0, \Sigma_{1})$, that is, a multivariate normal distribution with $q \times q$ variance-covariance matrix $\Sigma_{1}$. It follows that \begin{equation}\label{eq:6} g_{NN}(y) = \frac{1}{\psi}\; \phi\left(\frac{y - x^{\top}\beta}{\psi}\right), \end{equation} where $\psi^2 \equiv \operatorname{var}(Y) = z^{\top} \Sigma_{1} z + \sigma_{2}^2$.
Model~(\ref{eq:5}) is known as a linear mixed effects (LME) model or, simply, as a mixed model \citep{pinheiro_bates,demidenko_2013}. There is a vast number of applications of LME models, especially for the analysis of clustered data in the social, life and physical sciences.
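For concreteness, a random-intercept special case of model (\ref{eq:5}) can be simulated and fitted with widely available software; below is a minimal R sketch, assuming the \texttt{lme4} package is installed (the data are artificial).
\begin{verbatim}
## NN model with random intercepts: y_ij = b0 + b1*x_ij + nu_1i + nu_2ij
library(lme4)
set.seed(7)
M <- 50; n <- 10                    # number of clusters, cluster size
id <- rep(1:M, each = n)
x <- runif(M * n)
u <- rnorm(M, sd = 1)               # random intercepts (sigma_1 = 1)
y <- 1 + 2 * x + u[id] + rnorm(M * n, sd = 0.5)
fit <- lmer(y ~ x + (1 | id))       # REML fit by default
summary(fit)                        # estimates of beta, sigma_1, sigma_2
\end{verbatim}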
\subsection{Normal-Laplace (NL) convolution}\label{sec:2.2}
The second convolution consists of a normal and a Laplace components, that is \begin{equation}\label{eq:7} Y = \nu_{1} + \lambda_{2}, \end{equation} where $\nu_{1} \sim \mathcal{N}(0, \sigma_{1})$ and $\lambda_{2} \sim \mathcal{L}(0, \sigma_{2})$. The resulting density is given by \citep{reed_2006} \begin{equation}\label{eq:8} f_{NL}(y) = \frac{1}{\sqrt{2}\sigma_{2}}\;\phi\left(y/\sigma_{1}\right)\left\{R\left(\kappa - y/\sigma_{1}\right) + R\left(\kappa + y/\sigma_{1}\right)\right\}, \end{equation} where $\kappa = \sqrt{2}\sigma_{1}/\sigma_{2}$ and $R$ is the Mills ratio \[ R(t) = \frac{1 - \Phi(t)}{\phi(t)}. \] The above distribution arises from a Brownian motion whose starting value is normally distributed and whose stopping hazard rate is constant. An extension of (\ref{eq:8}) to skewed forms can be obtained by letting $\lambda_{2}$ follow an asymmetric Laplace distribution \citep{reed_2006}. Applications of the NL convolution can be found in finance \citep{reed_2007,meintanis_2010}. See also the double Pareto-lognormal distribution, associated with $\exp(Y)$, which has applications in modeling size distributions \citep{reed_jorgensen}.
As in the previous section, I consider a generalisation of model (\ref{eq:7}) to the regression case \begin{equation}\label{eq:9} Y = x^{\top}\beta + z^{\top}\nu_{1} + \lambda_{2}. \end{equation} If $q > 1$, then I assume $\nu_1 \sim \mathcal{N}_{q}(0, \Sigma_{1})$. It follows that $z^{\top}\nu_{1}$ is normal with mean zero and variance $z^{\top} \Sigma_{1}z$. This leads to \begin{equation}\label{eq:10} g_{NL}(y) = \frac{1}{\sqrt{2}\sigma_{2}}\;\phi\left(\frac{y - x^{\top}\beta}{\sigma_{1}}\right)\left\{R\left(\kappa - \frac{y - x^{\top}\beta}{\sigma_{1}}\right) + R\left(\kappa + \frac{y - x^{\top}\beta}{\sigma_{1}}\right)\right\}, \end{equation} where $\sigma_{1} \equiv \sqrt{z^{\top} \Sigma_{1}z}$ and, as defined above, $\kappa = \sqrt{2}\sigma_{1}/\sigma_{2}$. It is easy to verify that $\operatorname{var}(Y) = \sigma_{1}^2 + \sigma_{2}^2$.
Model~(\ref{eq:9}) is a median regression model with normal random effects, a special case of the linear quantile mixed models (LQMMs) discussed by \cite{geraci_bottai_2007,geraci_bottai_2014}. LQMMs have been used in a wide range of research areas, including marine biology \citep{muir_etal_2015,duffy_etal_2015,barneche_2106}, environmental science \citep{fornaroli_etal_2015}, cardiovascular disease \citep{degerud_2014,blankenberg_2016}, physical activity \citep{ng,beets}, and ophthalmology \citep{patel_etal_2015,patel_etal_2016}.
\subsection{Laplace-normal (LN) convolution}\label{sec:2.3} The Laplace-normal convolution is given by \begin{equation}\label{eq:11} Y = \lambda_{1} + \nu_{2}, \end{equation} where $\lambda_{1} \sim \mathcal{L}(0, \sigma_{1})$ and $\nu_{2} \sim \mathcal{N}(0, \sigma_{2})$. The LN appears in robust meta-analysis \cite[p.266]{demidenko_2013}.
The LN convolution in (\ref{eq:11}), clearly, is the same as the NL convolution in (\ref{eq:7}) (so I omit writing its density). However, note that now the Laplace component is associated with the random effect, not with the error term; therefore, the two scale parameters $\sigma_{1}$ and $\sigma_{2}$ will appear swapped. The distinction becomes clear when considering the regression model \begin{equation}\label{eq:12} Y = x^{\top}\beta + z^{\top}\lambda_{1} + \nu_{2}. \end{equation} By analogy with the NL convolution, I assume that, for $q > 1$, $\lambda_{1}$ has a $q$-dimensional multivariate Laplace distribution \cite[p.235]{kotz_2001}. \begin{definition} An $n$-dimensional random variable $T$ is said to follow a zero-centred multivariate Laplace distribution with parameter $\Sigma$, $T \sim \mathcal{L}_{n}(0,\Sigma)$, if its density is given by \[
f_{L}(t) = 2(2\pi)^{-n/2}\;\left|\Sigma\right|^{-1/2}\left(t^{\top} \Sigma^{-1} t/2\right)^{\omega/2}K_{\omega}\left(\sqrt{2t^{\top} \Sigma^{-1} t}\right), \] where $\Sigma$ is an $n \times n$ nonnegative definite symmetric matrix, $\omega = (2-n)/2$ and $K_{\omega}$ is the modified Bessel function of the third kind. \end{definition}
\begin{remark} If $T \sim \mathcal{L}_{q}(0,\Sigma)$, then $\operatorname{cov}(T) = \Sigma$ \cite[p.249]{kotz_2001}. For a diagonal matrix $\Sigma = \mathrm{diag}(\varsigma_{1}, \ldots, \varsigma_{q})$, the coordinates of the multivariate Laplace are uncorrelated, but not independent. Therefore, the joint distribution of $q$ independent univariate Laplace variates does not have the properties of the multivariate Laplace with diagonal variance-covariance matrix. \end{remark}
For $n=1$, the multivariate density defined above reduces to the univariate density (\ref{eq:2}) with $\sigma = \Sigma^{1/2}$. Moreover, a linear combination of the coordinates of the multivariate Laplace is still a Laplace \cite[p.255]{kotz_2001}. Indeed, if we assume $\lambda_{1} \sim \mathcal{L}_{q}(0,\Sigma_{1})$, then $z^{\top}\lambda_{1} \sim \mathcal{L}(0,\sigma_{1})$, where $\sigma_{1} = \sqrt{z^{\top} \Sigma_{1} z}$. Thus, the density of $Y$ in Equation (\ref{eq:12}) is given by \begin{equation}\label{eq:13} g_{LN}(y) = \frac{1}{\sqrt{2}\sigma_{1}}\;\phi\left(\frac{y - x^{\top}\beta}{\sigma_{2}}\right)\left\{R\left(\kappa - \frac{y - x^{\top}\beta}{\sigma_{2}}\right) + R\left(\kappa + \frac{y - x^{\top}\beta}{\sigma_{2}}\right)\right\}, \end{equation} where $\kappa = \sqrt{2}\sigma_{2}/\sigma_{1}$. Again, it is easy to verify that $\operatorname{var}(Y) = \sigma_{1}^2 + \sigma_{2}^2$.
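The multivariate Laplace defined above admits, like its univariate counterpart, a normal scale-mixture representation, $\sqrt{E}\,\zeta$ with $\zeta \sim \mathcal{N}_{q}(0,\Sigma)$ and $E$ standard exponential \citep[cf.][]{kotz_2001}. Assuming this representation, the following R sketch (illustrative code) simulates from $\mathcal{L}_{q}(0,\Sigma)$ and illustrates the point made in the Remark: with a diagonal $\Sigma$, the coordinates are uncorrelated yet dependent, as their squares are positively correlated through the common mixing variable.
\begin{verbatim}
## Simulate from L_q(0, Sigma) as sqrt(E)*Z, Z ~ N_q(0, Sigma)
rmlaplace <- function(n, Sigma) {
  q <- nrow(Sigma)
  Z <- matrix(rnorm(n * q), n, q) %*% chol(Sigma)
  sqrt(rexp(n)) * Z                # each row scaled by its own sqrt(E)
}
set.seed(3)
lam <- rmlaplace(1e5, Sigma = diag(2))
cor(lam)[1, 2]                     # ~ 0: uncorrelated coordinates
cor(lam[, 1]^2, lam[, 2]^2)        # > 0: but not independent
\end{verbatim}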
\subsection{Laplace-Laplace (LL) convolution}\label{sec:2.4} The fourth and last convolution consists of two Laplace variates, i.e. \begin{equation}\label{eq:14} Y = \lambda_{1} + \lambda_{2}, \end{equation} where $\lambda_{1} \sim \mathcal{L}(0, \sigma_{1})$ and $\lambda_{2} \sim \mathcal{L}(0, \sigma_{2})$. It can be shown \cite[p.35]{kotz_2001} that the density of $Y$ is \begin{equation}\label{eq:15} f_{LL}(y) = \begin{cases}
\dfrac{1}{4}s (1 + s|y|)\exp(-s|y|), & \text{if $s_{1} = s_{2} = s,$}\\
\dfrac{\kappa}{2\kappa^2 - 2} \left\{s_{1}\exp(-s_{2}|y|) - s_{2}\exp(-s_{1}|y|)\right\}, & \text{if $s_{1}/s_{2} = \kappa \neq 1,$} \end{cases} \end{equation} with $s_{1} = \sqrt{2}/\sigma_{1}$ and $s_{2} = \sqrt{2}/\sigma_{2}$, that is, the rates of the corresponding Laplace densities (\ref{eq:2}).
For the regression model \begin{equation}\label{eq:16} Y = x^{\top}\beta + z^{\top}\lambda_{1} + \lambda_{2}, \end{equation} with $\lambda_{1} \sim \mathcal{L}_{q}(0,\Sigma_1)$, I obtain \begin{equation}\label{eq:17} g_{LL}(y) = \begin{cases}
\dfrac{1}{4}s (1 + s|y - x^{\top}\beta|)\exp(-s|y - x^{\top}\beta|), & \text{if $s_{1} = s_{2} = s,$}\\
\dfrac{\kappa}{2\kappa^2 - 2} \left\{s_{1}\exp(-s_{2}|y - x^{\top}\beta|) \right.& \\
\left. \qquad - s_{2}\exp(-s_{1}|y - x^{\top}\beta|)\right\}, & \text{if $s_{1}/s_{2} = \kappa \neq 1,$} \end{cases} \end{equation} with $s_{1} = \sqrt{2}/\sigma_{1}$, $s_{2} = \sqrt{2}/\sigma_{2}$ and $\sigma_{1} = \sqrt{z^{\top} \Sigma_{1} z}$. The variance is given by $\operatorname{var}(Y) = \sigma_{1}^2 + \sigma_{2}^2$.
Model~(\ref{eq:16}) is a median regression model with `robust' random effects, another special case of LQMMs \citep{geraci_bottai_2014}.
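Density (\ref{eq:15}) is equally simple to code. The following R sketch (again illustrative; \texttt{dll} is a hypothetical function name) implements both branches, with $s_{i} = \sqrt{2}/\sigma_{i}$ as in Section~\ref{sec:2.4}, and verifies numerically that the density integrates to one.
\begin{verbatim}
## LL density (eq. 15) with rates s_i = sqrt(2)/sigma_i
dll <- function(y, sigma1, sigma2) {
  s1 <- sqrt(2) / sigma1; s2 <- sqrt(2) / sigma2
  if (isTRUE(all.equal(s1, s2))) {
    0.25 * s1 * (1 + s1 * abs(y)) * exp(-s1 * abs(y))
  } else {
    k <- s1 / s2
    k / (2 * k^2 - 2) * (s1 * exp(-s2 * abs(y)) - s2 * exp(-s1 * abs(y)))
  }
}
integrate(dll, -Inf, Inf, sigma1 = 1, sigma2 = 2)  # ~ 1
\end{verbatim}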
\subsection{Some properties}\label{sec:2.5} All the convolutions are symmetric, unimodal, twice differentiable and have continuous first and second derivatives (the NN and NL are also smooth). Also, they are log-concave, since both the normal (\ref{eq:1}) and Laplace (\ref{eq:2}) densities are log-concave \citep{prekopa_1973}. The right-hand side plots of Figure~\ref{fig:1} show that, as compared to the NN density, the NL (LN) and LL densities are leptokurtic and have more weight in the tails, with the NL density sitting between the NN and LL distributions. Thus, the presence of the Laplace term in the convolution confers different degrees of robustness to the model, depending on whether one or both random terms are assumed to be Laplacian. Also, notice that the marginal regression models are location--scale-shift models, since both the location and the scale of $Y$ are functions of the covariates.
\section{Inference}\label{sec:3}
In this section, I briefly discuss inferential issues, with detailed mathematical derivations provided in Appendix.
Let $Y_{i} = (Y_{i1}, Y_{i2}, \ldots, Y_{in_{i}})^{\top}$ be a multivariate $n_{i} \times 1$ random response vector, and let $x_{ij}$ and $z_{ij}$ be vectors of covariates for the $j$th observation, $j = 1, \ldots, n_{i}$, in cluster $i$, $i = 1, \ldots, M$. Each component of $Y_{i}$ can be modelled using any of the convolutions discussed in Section \ref{sec:2} by assuming \begin{equation} \nonumber Y_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}\varepsilon_{1i} + \varepsilon_{2ij}, \end{equation} where the random effect $\varepsilon_{1i}$ and the error term $\varepsilon_{2ij}$ are either Gaussian or Laplacian according to the scheme in Table \ref{tab:1}. The marginal models implied by these four convolutions have been defined in expressions (\ref{eq:6}), (\ref{eq:10}), (\ref{eq:13}), and (\ref{eq:17}). At the cluster level, I use the notation \begin{equation}\label{eq:18} Y_{i} = X_{i}\beta + Z_{i}\varepsilon_{1i} + \varepsilon_{2i}, \end{equation} where $X_{i}$ and $Z_{i}$ are, respectively, $n_{i} \times p$ and $n_{i} \times q$ design matrices. I assume that the vector of random effects $\varepsilon_{1i}$ has variance-covariance matrix $\Sigma_{1}$ for all $i = 1, \ldots, M$ and that the $Y_{i}$'s are independent of one another. The structure of $\Sigma_1$ is, for the moment, left unspecified. Also, I assume that $\operatorname{cov}(\varepsilon_{2i})$ is a multiple of the identity matrix, although this assumption can be easily relaxed (see Section \ref{sec:3.4}).
There are several approaches to mixed effects model estimation \citep[see, for example,][]{pinheiro_bates,demidenko_2013}, each approach having its own advantages and disadvantages. One approach is to work with the marginal likelihood of $Y_{i}$. Although independence between clusters can still be assumed, in general the $Y_{ij}$'s will be correlated within the same cluster. Therefore, parameter estimation based on the marginal likelihood requires knowing the joint distribution of $Y_{i1}, Y_{i2}, \ldots, Y_{in_{i}}$. Under the NN convolution, $Y_{i}$ is known to be multivariate normal. It is beyond the scope of this paper to derive the multivariate distribution of $Y_{i}$ for the NL, LN and LL convolutions.
Likelihood-based estimation of location and scale parameters using the NN model has been largely studied. Therefore, I will focus on the NL, LN, and LL models. Since an important aspect in applied research is the availability of software to perform data analysis, here I consider two methods which can be applied using existing software. The first method is based on numerical integration and applies to specific NL and LL models, while the other method is based on a Monte Carlo Expectation-Maximisation (MCEM) algorithm and applies to NL, LN and LL models.
\subsection{Numerical integration}\label{sec:3.1}
Let the $i$th contribution to the marginal log-likelihood be \begin{equation} \nonumber \ell(\beta, \Sigma_1, \sigma_2; y_{i}) = \log \int_{\mathbb{R}^{q}} g\left(y_{i} - X_{i}\beta - Z_{i}u_{i}\right) h(u_{i}) \,\rd u_{i}, \end{equation} where $g$ denotes the density of the error term conditional on the random effect $u_i$ and $h$ denotes the density of the random effect. One can work with the numerically integrated likelihood \begin{equation}\label{eq:19} \tilde{\ell}(\beta, \Sigma_1, \sigma_2; y_{i}) = \log \sum_{k_{1}=1}^{K}\cdots\sum_{k_{q}=1}^{K} g\left(y_{i} - X_{i}\beta - Z_{i}\left(\Sigma_{1}\right)^{1/2}v_{k_{1},\ldots,k_{q}}\right) h(v_{k_{1},\ldots,k_{q}}), \end{equation} with nodes $v_{k_{1},\ldots,k_{q}} = (v_{k_{1}}, \ldots, v_{k_{q}})^{\top}$ and weights $h(v_{k_{1},\ldots,k_{q}})$, $k_{l} = 1,\ldots,K$, $l = 1, \ldots, q$, as an approximation to the marginal log-likelihood.
The maximisation of the approximate log-likelihood (\ref{eq:19}) can be time-consuming depending on the dimension of the quadrature $q$, the required accuracy of the approximation controlled by the number of nodes $K$, and, of course, the distribution $h(u)$. If $\Sigma_1$ is a diagonal matrix, then $h(v_{k_{1},\ldots,k_{q}})=\prod_{l=1}^{q} h(v_{k_{l}})$. This greatly simplifies calculations since the $q$-dimensional integral can be carried out with $q$ successive applications of one-dimensional quadrature rules. In the multivariate normal case, a non-diagonal covariance matrix can be rescaled to a diagonal one and the joint density factorises into $q$ normal variates. However, this is not the case for the multivariate Laplace, at least not for the one defined in Section~\ref{sec:2.3}. \cite{geraci_bottai_2014} considered a steepest-descent approach combined with Gauss-Hermite and Gauss-Laguerre quadrature for, respectively, the NL and LL likelihoods. Standard errors were obtained by bootstrapping the clusters (block bootstrap).
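To make (\ref{eq:19}) concrete, the following sketch implements the Gauss--Hermite approximation for the simplest case of a scalar normal random intercept ($q = 1$) with Laplace errors (the NL model). It is an illustration only: the function name, the data layout, and the use of Python with \texttt{scipy}'s Laplace parameterisation are my assumptions, not the \texttt{lqmm} implementation.
\begin{verbatim}
# Gauss-Hermite approximation of the marginal log-likelihood for a
# random-intercept NL model: u_i ~ N(0, tau^2), Laplace errors with
# scipy scale b (var = 2 b^2).  Names and layout are illustrative.
import numpy as np
from scipy.special import logsumexp
from scipy.stats import laplace

def nl_loglik_gh(beta, tau, b, y, X, cluster, K=20):
    nodes, weights = np.polynomial.hermite.hermgauss(K)  # weight exp(-v^2)
    u = np.sqrt(2.0) * tau * nodes   # change of variable for N(0, tau^2)
    ll = 0.0
    for i in np.unique(cluster):
        r = y[cluster == i] - X[cluster == i] @ beta
        # conditional log-density of the cluster at each node
        f = np.array([laplace.logpdf(r - uk, scale=b).sum() for uk in u])
        ll += logsumexp(f + np.log(weights)) - 0.5 * np.log(np.pi)
    return ll
\end{verbatim}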
Since \citeauthor{geraci_2014}'s (\citeyear{geraci_2014}) algorithms, which are implemented in the R package \texttt{lqmm}, can be applied to selected models only (namely, NL models with correlated or uncorrelated random effects and LL models with uncorrelated random effects), in the next section I develop an alternative, more general approach based on the EM algorithm.
\subsection{EM estimation}\label{sec:3.2}
Rather than working with the Laplace distribution directly, I consider its representation as a scale mixture of normal distributions. As noted before, if $T \sim \mathcal{L}(0, \sigma)$, then $T \,{\buildrel d \over =}\, \sigma\sqrt{W}V$, where $W$ and $V$ are, respectively, independent standard exponential and normal variates. This equivalence has been used in EM estimation of regression quantiles \citep[see, for example,][]{lum_gelfand,tian_etal_2013}. Similarly, in the multivariate case, if $T \sim \mathcal{L}_{q}(0,\Sigma_1)$, then $T \,{\buildrel d \over =}\, \sqrt{W}V$, where $W$ is, again, standard exponential and $V \sim \mathcal{N}_{q}(0, \Sigma_{1})$. As shown in Appendix, the normal components in the scale mixture representation of the NL, LN, and LL models can be easily convolved (conditionally on $W$) and the resulting log-likelihood for the $i$th cluster becomes \begin{equation}
\nonumber \ell\left(\beta, \Sigma_1, \sigma_2; y_{i},w_{i}\right) = \log g\left(y_{i}|w_{i}\right) + \log h(w_{i}), \end{equation} where $g$ is multivariate normal and $h$ is standard exponential.
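The scale-mixture representation is easy to check by simulation; the sketch below (illustrative Python, not the paper's code) draws $T = \sigma\sqrt{W}V$ and recovers the Laplace variance $\sigma^2$ and kurtosis of 6.
\begin{verbatim}
# Check of the scale-mixture representation: W ~ Exp(1), V ~ N(0,1)
# independent, so T = sigma*sqrt(W)*V has var sigma^2 and kurtosis 6,
# the Laplace values.  sigma is an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
t = sigma * np.sqrt(rng.exponential(size=10**6)) \
          * rng.standard_normal(10**6)
print(t.var())                                  # ~ sigma^2 = 4
print(((t - t.mean())**4).mean() / t.var()**2)  # ~ 6 (normal: 3)
\end{verbatim}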
The proposed EM algorithm starts from the likelihood of the complete data $(y_{i},w_{i})$, where $w_{i}$ represents the unobservable data. In the E-step, the expected value of the complete log-likelihood is approximated using a Monte Carlo expectation. As shown in expression~\eqref{eq:A.6} in Appendix, the M-step reduces to the maximum likelihood estimation of a linear mixed model with prior weights which can be carried out using fitting routines from existing software (e.g., \texttt{nlme} or \texttt{lme4} in R).
\subsection{Modelling and estimation of $\Sigma_1$}\label{sec:3.3} There are different possible structures for $\Sigma_1$. The simplest is a multiple of the identity matrix, with constant diagonal elements and zero off-diagonal elements. Other structures include, for example, diagonal (variance components), compound symmetric (constant diagonal and constant off-diagonal elements), and the more general symmetric positive-definite matrix. These are all available in the \texttt{nlme} \citep{pinheiro_2014}, \texttt{lme4} \citep{bates_2015} and \texttt{lqmm} \citep{geraci_2014} packages, as well as in SAS procedures for mixed effects models.
The variance-covariance matrix of the random effects, whether normal or Laplace, must be nonnegative definite. However, it is possible that, during MLE, the estimate $\hat{\Sigma}_1$ may be singular or veer off into the space of negative definite matrices. This problem does not occur in EM estimation if the starting matrix is nonnegative definite. However, the monotonicity property is lost when a Monte Carlo error is introduced at the E-step \citep{mclachlan_2008}. There are at least three approaches one can consider \citep[p.88]{demidenko_2013}: (i) allow $\hat{\Sigma}_1$ to be negative definite during estimation and, if negative definite at convergence, replace it with a nonnegative definite matrix after the algorithm has converged; (ii) constrained optimisation; (iii) matrix reparameterisation \citep{pinheiro_bates}. As discussed in Appendix, I follow the latter approach.
\subsection{Residual heteroscedasticity and correlation}\label{sec:3.4}
The development of the EM algorithm discussed above is based on the assumption that the within-group errors are independent with common scale parameter $\sigma_{2}$. As briefly outlined in Section~\ref{sec:A.6} in Appendix, it is easy to extend the NL, LN, and LL models to the case of heteroscedastic and correlated errors. Commonly available mixed effects software provide capabilities for estimating residual variance and correlation parameters. For the sake of simplicity, I do not consider this extension any further in this paper.
\section{Examples}\label{sec:4} \subsection{Meta-analysis}\label{sec:4.1}
Here, I discuss an application in meta-analysis. The data consist of mean standard deviation scores of height at diagnosis in osteosarcoma patients which had been reported in five different studies (Figure~\ref{fig:2}) and were subsequently meta-analysed by \cite{arora_2011}. Let $Y$ denote the study-specific effect. For these data, I considered the NN model \citep{dersimonian_laird} \begin{eqnarray*} Y_{i} = \mu + \nu_{1i} + \nu_{2i}, & i = 1,\ldots,5, \end{eqnarray*} where $\nu_{1i} \sim \mathcal{N}(0, \tau)$ and $\nu_{2i} \sim \mathcal{N}(0, \sigma_{i})$, and the LN model \citep{demidenko_2013} \begin{eqnarray*} Y_{i} = \mu + \lambda_{1i} + \nu_{2i}, & i = 1,\ldots,5, \end{eqnarray*} where $\lambda_{1i} \sim \mathcal{L}(0, \tau)$ and $\nu_{2i} \sim \mathcal{N}(0, \sigma_{i})$.
\begin{figure}
\caption{Forest plot for five studies on the relationship between height at diagnosis and osteosarcoma in young people. Each study is represented by a block, with area proportional to its weight, centered at the effect point estimate. Horizontal grey lines depict $95\%$ confidence intervals.}
\label{fig:2}
\end{figure}
In meta-analysis, the goal is to estimate an `overall' or `pooled' effect ($\mu$) and the between-study variance or heterogeneity among study-specific effects ($\tau^2$). The sampling variances $\sigma_{i}^2$ are assumed to be known.
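For reference, the NN fit above corresponds to the classical DerSimonian--Laird estimator, which can be sketched in a few lines (Python, with a toy interface; the actual analysis used the R software cited below):
\begin{verbatim}
# DerSimonian-Laird estimator for the NN meta-analysis model:
# y = study effects, s2 = known sampling variances.  Interface is
# illustrative, not the R routines used in the paper.
import numpy as np

def dersimonian_laird(y, s2):
    w = 1.0 / s2                          # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe)**2)        # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (Q - (k - 1)) /
               (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (s2 + tau2)              # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, np.sqrt(1.0 / np.sum(w_re)), tau2
\end{verbatim}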
Estimation for the osteosarcoma data was carried out using R software developed for standard \citep{viechtbauer_2010} and robust \citep{demidenko_2013} meta-analysis. The estimates (standard errors) of $\mu$ and $\tau^2$ were, respectively, $0.260$ (0.087) and $0.029$ (0.027) for the NN model, and $0.246$ (0.073) and $0.021$ (0.033) for the LN model. The larger estimated overall effect and heterogeneity for the NN model are a consequence of the outlying effect size of study 5 (Figure~\ref{fig:2}) which skews the location $\mu$ and inflates the scale of the normal distribution. In contrast, the Laplace distribution is more robust to outliers and heavy tails. Indeed, the estimate of $\mu$ from the LN model was more precise as demonstrated by the lower standard error (as a consequence, the related test statistic has smaller $p$-value). A similar example is described by \cite{demidenko_2013}.
\subsection{Repeated measurements in clinical trials}\label{sec:4.2}
Ten Crohn's disease patients with endoscopic recurrence were followed over time \citep{sorrentino_etal_2010}. Colonoscopy was performed and surrogate markers of disease activity were collected on four occasions. One of the goals of this trial was to assess the association between fecal calprotectin (FC -- mg/kg) and endoscopic score (ES -- Rutgeerts). The data were analysed using a log-linear median regression model under the assumption of independence between measurements \citep{sorrentino_etal_2010}. Here, I take the within-patient correlation into account and analyse the data using three of the four regression models discussed in Section~\ref{sec:2}: the NN model \begin{eqnarray*} \log Y_{ij} = \beta_{0} + \beta_{1}x_{ij} + \nu_{1i} + \nu_{2ij}, & \; j = 1, \ldots, 4, & \; i = 1,\ldots,10, \end{eqnarray*} the NL model \begin{eqnarray*} \log Y_{ij} = \beta_{0} + \beta_{1}x_{ij} + \nu_{1i} + \lambda_{2ij}, & \; j = 1, \ldots, 4, & \; i = 1,\ldots,10, \end{eqnarray*} and the LL model \begin{eqnarray*} \log Y_{ij} = \beta_{0} + \beta_{1}x_{ij} + \lambda_{1i} + \lambda_{2ij}, & \; j = 1, \ldots, 4, & \; i = 1,\ldots,10, \end{eqnarray*} where $Y_{ij}$ and $x_{ij}$ denote, respectively, FC and ES measurements on patient $i$ at occasion $j$, $\nu_{1i} \sim \mathcal{N}(0, \tau)$, $\nu_{2ij} \sim \mathcal{N}(0, \sigma)$, $\lambda_{1i} \sim \mathcal{L}(0, \tau)$, and $\lambda_{2ij} \sim \mathcal{L}(0, \sigma)$. Therefore, the variance of the random effects is $\tau^2$, while the variance of the error term is $\sigma^2$.
\begin{table}[ht!] \caption{Association between fecal calprotectin and endoscopic score in Crohn's disease patients. Estimates and standard errors (SE) of the fixed effects ($\beta$), variance of the random effects ($\tau^2$), and intra-class correlation ($\rho$) from three models. The log-likelihood ($\ell$) is reported in brackets.} \centering \begin{tabular}{lrrrr}
\toprule
& $\beta_{0}$ & $\beta_{1}$ & $\tau^2$ & $\rho$ \\
\hline \multicolumn{5}{l}{\textit{Normal-Normal} ($\ell = -21.8$)}\\ Estimate & 3.293 & 0.910 & 0.031 & 0.191 \\ SE & 0.113 & 0.056 & 0.133 & \\ \multicolumn{5}{l}{\textit{Normal-Laplace} ($\ell = -22.2$)}\\ Estimate & 3.354 & 0.871 & 0.994 & 0.877 \\ SE & 0.135 & 0.051 & 0.046 & \\ \multicolumn{5}{l}{\textit{Laplace-Laplace} ($\ell = -14.2$)}\\ Estimate & 3.269 & 0.905 & 0.293 & 0.757 \\ SE & 0.114 & 0.035 & 0.053 & \\ \hline \end{tabular}\label{tab:2} \end{table}
In this case, the parameters of interest are the slope $\beta_1$ and the intra-class correlation $\rho = \tau^2/(\tau^2 + \sigma^2)$, which measures how much of the total variance is due to between-individual variability. Estimation was carried out using the \texttt{nlme} \citep{pinheiro_2014} and \texttt{lqmm} \citep{geraci_2014} packages. The results are shown in Table~\ref{tab:2}. The estimates of the regression coefficients $\beta$ tallied across models. However, the estimates of $\tau^2$ and $\rho$ differed substantially, with values from the NN model much lower than those from the NL and LL models. First-level residuals (i.e., predictions of the random effects plus the error term) and second-level residuals (i.e., predictions of the error term only) from the NN model are shown in Figure~\ref{fig:3}. It is apparent that $\sigma^2$ may be inflated by an unusual second-level residual, to the detriment of $\tau^2$. As a consequence, the intra-class correlation appeared to be heavily underestimated by the NN model. The NL model improved upon the estimation of the scale parameters as it is more robust to outliers in the error term. However, the LL model gave the largest value of the log-likelihood, suggesting that the goodness of the fit is further improved by using a robust distribution for the random effects as well. Note also that the standard error of the slope was smallest for the LL model.
\begin{figure}
\caption{QQ-plot of the first-level (left plot) and second-level (right plot) residuals from the normal-normal model for the Crohn's disease data.}
\label{fig:3}
\end{figure}
\subsection{Growth curves}\label{sec:4.3}
In a weight gain experiment, 30 rats were randomly assigned to three treatment groups: treatment 1, a control (no additive); treatments 2 and 3, which consisted of two different additives (thiouracil and thyroxin, respectively) added to the rats' drinking water \citep{box_1950}. Weight (grams) of the rats was measured at baseline (week 0) and at weeks 1, 2, 3, and 4. Data on three of the 10 rats from the thyroxin group were subsequently removed due to an accident at the beginning of the study. Figure~\ref{fig:4} shows estimated intercepts and slopes obtained from rat-specific LS regressions of the type \begin{eqnarray*} Y_{ij,k} = \beta_{0i,k} + \beta_{1i,k}x_{j} + \sigma_{i,k} \varepsilon_{ij,k}, & \; \varepsilon_{ij,k} \sim \mathcal{N}(0,1), \end{eqnarray*} where the response $Y_{ij,k}$ is the weight measurement taken on rat $i = 1, \dots, M_{k}$ on occasion $j = 1, \ldots, 5$ conditional on treatment group $k = 1, 2, 3$, and $x_{j} = j - 1$. (Note that $M_{1} = M_{2} = 10$ and $M_{3} = 7$.) It is evident that the weight of rats treated with thiouracil grew slower than the controls', though at baseline the former tended to be heavier than the latter. In contrast, rats in the control and thyroxin groups had, on average, similar intercepts and slopes.
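The individual fits in Figure~\ref{fig:4} amount to one ordinary LS line per rat; a minimal sketch (Python, with an illustrative data layout) is:
\begin{verbatim}
# One straight line per rat: weight ~ week, week = 0..4.
# weights_by_rat maps a rat id to its five weekly weights (illustrative).
import numpy as np

def per_rat_ls(weights_by_rat):
    x = np.arange(5.0)
    fits = {}
    for rat, y in weights_by_rat.items():
        slope, intercept = np.polyfit(x, np.asarray(y, float), deg=1)
        fits[rat] = (intercept, slope)    # (beta0_hat, beta1_hat)
    return fits
\end{verbatim}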
\begin{figure}
\caption{Ordinary least squares estimates of intercepts and slopes for individual growth curves in the rats weight gain data. The scatterplots on the top show the pairwise estimates with LOESS smoothing superimposed (dashed grey lines mark mean values). The plots on the bottom depict the estimated densities of intercepts (solid line) and slopes (dashed line) centred and scaled using their respective means and standard deviations.}
\label{fig:4}
\end{figure}
The Pearson's correlation coefficients of the estimated intercept-slope pairs $(\hat{\beta}_{0i,k},\hat{\beta}_{1i,k})$ gave $-0.26$ ($k = 1$), $-0.37$ ($k = 2$), and $-0.16$ ($k = 3$), suggesting a negative association between baseline weight and growth rate in all treatment groups. However, the direction of the association in treatment group 3 is unclear. Interestingly, the Kendall rank correlation coefficient in the thyroxin group indicated a weak positive association ($0.05$), while the Pearson's coefficient became strongly positive ($0.97$) after removing the two pairs with the largest slopes. Moreover, the distributions of intercepts and slopes showed the presence of skewness and bimodality. Therefore, some degree of robustness against departures from normality might be needed.
To model the heterogeneity within each treatment group, subject-specific random intercepts and slopes were included in the following four models: the NN model \begin{align*} Y_{ij,k} = \beta_{0,k} + \beta_{1,k}x_{j} + \nu^{(1)}_{1i,k} &+ \nu^{(2)}_{1i,k}x_{j} + \nu_{2ij,k}, \\ & \quad j = 1, \ldots, 5, \quad i = 1,\ldots,M_{k}, \quad k = 1,2,3, \end{align*} the NL model \begin{align*} Y_{ij,k} = \beta_{0,k} + \beta_{1,k}x_{j} + \nu^{(1)}_{1i,k} &+ \nu^{(2)}_{1i,k}x_{j} + \lambda_{2ij,k},\\ & \quad j = 1, \ldots, 5, \quad i = 1,\ldots,M_{k}, \quad k = 1,2,3, \end{align*} the LN model \begin{align*} Y_{ij,k} = \beta_{0,k} + \beta_{1,k}x_{j} + \lambda^{(1)}_{1i,k} &+ \lambda^{(2)}_{1i,k}x_{j} + \nu_{2ij,k},\\ & \quad j = 1, \ldots, 5, \quad i = 1,\ldots,M_{k}, \quad k = 1,2,3, \end{align*} and the LL model \begin{align*} Y_{ij,k} = \beta_{0,k} + \beta_{1,k}x_{j} + \lambda^{(1)}_{1i,k} &+ \lambda^{(2)}_{1i,k}x_{j} + \lambda_{2ij,k},\\ & \quad j = 1, \ldots, 5, \quad i = 1,\ldots,M_{k}, \quad k = 1,2,3, \end{align*} where I assumed $\left(\nu^{(1)}_{1i,k},\nu^{(2)}_{1i,k}\right) \sim \mathcal{N}_{2}(0, \Sigma_{1,k})$, $\nu_{2ij,k} \sim \mathcal{N}(0, \sigma_{2})$, $\left(\lambda^{(1)}_{1i,k},\lambda^{(2)}_{1i,k}\right) \sim \mathcal{L}_{2}(0,\Sigma_{1,k})$, and $\lambda_{2ij,k} \sim \mathcal{L}(0, \sigma_{2})$, and the $\Sigma_{1,k}$'s, $k = 1,2,3$, are $2 \times 2$ symmetric matrices, \[ \Sigma_{1,k} = \left[\begin{array}{cc}
\varsigma_{11,k} & \varsigma_{12,k} \\
\varsigma_{12,k} & \varsigma_{22,k}
\end{array}\right]. \] Further, I assumed that the random effects are uncorrelated between treatment groups.
\begin{table}[t!] \caption{Rats weight gain data. Estimates and standard errors (SE) of the fixed effects ($\beta$) from four models. The log-likelihood ($\ell$) is reported in brackets.} \centering \begin{tabular}{lrrrrrr}
\toprule
& $\beta_{0,1}$ & $\beta_{0,2}$ & $\beta_{0,3}$ & $\beta_{1,1}$ & $\beta_{1,2}$ & $\beta_{1,3}$ \\
\hline \multicolumn{7}{l}{\textit{Normal-Normal} ($\ell = -444.4$)}\\ Estimate & 52.880 & 57.700 & 52.086 & 26.480 & 17.050 & 27.143 \\
SE & 2.349 & 2.058 & 1.578 & 1.177 & 0.879 & 1.928 \\ \multicolumn{7}{l}{\textit{Normal-Laplace} ($\ell = -448.4$)}\\
Estimate & 52.934 & 57.568 & 52.928 & 26.383 & 17.208 & 26.791 \\
SE & 2.427 & 2.204 & 1.519 & 1.208 & 0.928 & 2.146 \\ \multicolumn{7}{l}{\textit{Laplace-Normal} ($\ell = -551.6$)}\\
Estimate & 53.069 & 58.392 & 51.104 & 25.620 & 16.794 & 26.665 \\
SE & 1.992 & 1.972 & 1.817 & 0.885 & 0.814 & 1.910 \\ \multicolumn{7}{l}{\textit{Laplace-Laplace} ($\ell = -454.0$)}\\
Estimate & 52.680 & 58.433 & 53.415 & 26.067 & 17.305 & 27.621 \\
SE & 1.960 & 1.762 & 1.041 & 0.924 & 0.748 & 1.353 \\
\hline \end{tabular}\label{tab:3} \end{table}
\begin{table}[ht!] \caption{Rats weight gain data. Estimated correlation matrix of the random intercepts and slopes for each treatment group from four models. The log-likelihood ($\ell$) is reported in brackets.} \centering \begin{tabular}{lrrrrrr}
\toprule \multicolumn{7}{l}{\textit{Normal-Normal} ($\ell = -444.4$)}\\
\hline & \multicolumn{2}{c}{\textit{Treatment 1}} & \multicolumn{2}{c}{\textit{Treatment 2}} & \multicolumn{2}{c}{\textit{Treatment 3}}\\
& Int. & Slope & Int. & Slope & Int. & Slope \\ Int. & 1.000 & & 1.000 & & 1.000 & \\ Slope & $-$0.145 & 1.000 & $-$0.203 & 1.000 & 0.050 & 1.000 \\
\hline \multicolumn{7}{l}{\textit{Normal-Laplace} ($\ell = -448.4$)}\\
\hline & \multicolumn{2}{c}{\textit{Treatment 1}} & \multicolumn{2}{c}{\textit{Treatment 2}} & \multicolumn{2}{c}{\textit{Treatment 3}}\\
& Int. & Slope & Int. & Slope & Int. & Slope \\ Int. & 1.000 & & 1.000 & & 1.000 & \\ Slope & $-$0.076 & 1.000 & $-$0.133 & 1.000 & 0.634 & 1.000 \\
\hline \multicolumn{7}{l}{\textit{Laplace-Normal} ($\ell = -551.6$)}\\
\hline & \multicolumn{2}{c}{\textit{Treatment 1}} & \multicolumn{2}{c}{\textit{Treatment 2}} & \multicolumn{2}{c}{\textit{Treatment 3}}\\
& Int. & Slope & Int. & Slope & Int. & Slope \\ Int. & 1.000 & & 1.000 & & 1.000 & \\ Slope & $-$0.117 & 1.000 & $-$0.065 & 1.000 & 0.194 & 1.000 \\
\hline \multicolumn{7}{l}{\textit{Laplace-Laplace} ($\ell = -454.0$)}\\
\hline & \multicolumn{2}{c}{\textit{Treatment 1}} & \multicolumn{2}{c}{\textit{Treatment 2}} & \multicolumn{2}{c}{\textit{Treatment 3}}\\
& Int. & Slope & Int. & Slope & Int. & Slope \\ Int. & 1.000 & & 1.000 & & 1.000 & \\ Slope & 0.030 & 1.000 & $-$0.294 & 1.000 & 0.876 & 1.000 \\
\hline \end{tabular}\label{tab:4} \end{table}
The NL, LN, and LL models were estimated using the EM algorithm as detailed in Appendix with a Monte Carlo size equal to $100$, fixed at each EM iteration, and a convergence tolerance of $5\cdot 10^{-4}$. The four models gave similar estimates of the fixed effects (Table \ref{tab:3}), although the trajectory in the thiouracil group resulting from the LN model tended to be less steep than the corresponding trajectory resulting from the other three models. However, this difference might be of little practical importance. In contrast, the differences between the estimates of the correlation matrices $D_{1,k}^{-1}\Sigma_{1,k}D_{1,k}^{-1}$, where $D_{1,k} = \mathrm{diag}(\sqrt{\varsigma_{11,k}}, \sqrt{\varsigma_{22,k}})$, $k=1,2,3$, seemed more substantial (Table \ref{tab:4}). It is interesting to note that there is disagreement on the magnitude and even direction of some of the estimates. Notably, $\hat{\varsigma}_{12,3}/(\hat{\varsigma}_{11,3}\hat{\varsigma}_{22,3})^{1/2}$ was smallest for the NN model but it was substantially larger for the NL and LL models. The best fit in terms of the log-likelihood was for the NN model, followed closely by the NL model. The LL model and, especially, the LN model gave smaller log-likelihoods.
\section{Final remarks}\label{sec:5}
In the words of \citet[][p.842]{wilson_1923} ``No phenomenon is better known perhaps, as a plain matter of fact, than that the frequencies which I actually meet in everyday work in economics, in biometrics, or in vital statistics, very frequently fail to conform at all closely to the so-called normal distribution''. Kotz and colleagues (\citeyear{kotz_2001}) echo Wilson's observations on the inadequacy of the normal distribution in many practical applications and give a systematic exposition of the Laplace distribution, an unjustifiably neglected error law which can be ``a natural and sometimes superior alternative to the normal law'' \citep[p.13]{kotz_2001}.
My proposed $2 \times 2$ convolution scheme brings together the normal and Laplace distributions showing that these models represent a \textit{family} of sensible alternatives as they introduce a varying degree of robustness in the modelling process. Estimation can be approached in different ways. The EM algorithm discussed in this paper takes advantage of the scale mixture representation of the Laplace distribution which provides the opportunity for computational simplification. In a simulation study with a moderate sample size (see Section~\ref{sec:A.7} in Appendix), this algorithm provided satisfactory results in terms of mean squared error for the NL and LL models. The estimation of the LN model needed a relatively larger number of Monte Carlo samples to achieve reasonable bias, though the results were never fully satisfactory in terms of efficiency. Finally, model selection has been left out of consideration, but further research on this topic is needed, especially at smaller sample sizes. An interesting starting point is offered by \cite{kundu_2005}.
To reiterate the main point of this study, these convolutions have a large number of potential applications and, as demonstrated using several examples, may provide valuable insight into different aspects of the analysis.
\section*{Appendix}
\subsection{EM estimation}\label{sec:A.1}
Here, I discuss maximum likelihood inference for $\beta$, $\Sigma_1$, and $\sigma_2$ in normal-Laplace (NL), Laplace-normal (LN), and Laplace-Laplace (LL) models. In particular, I develop an estimation approach based on the scale mixture representation of the Laplace distribution. If $T \sim \mathcal{L}(0, \sigma)$ then $T \overset{d}{=} \sigma\sqrt{W}V$, where $W$ and $V$ are, respectively, independent standard exponential and normal variates. Similarly, if $T \sim \mathcal{L}_{q}(0,\Sigma_1)$, then $T \overset{d}{=} \sqrt{W}V$, where $W$ is, again, standard exponential and $V \sim \mathcal{N}_{q}(0, \Sigma_{1})$ \citep{kotz_2001}.
Let $Y_{i} = (Y_{i1}, Y_{i2}, \ldots, Y_{in_{i}})^{\top}$ be a multivariate $n_{i} \times 1$ random vector, and $x_{ij}$ and $z_{ij}$ be, respectively, $p \times 1$ and $q \times 1$ vectors of covariates for the $j$th observation, $j = 1, \ldots, n_{i}$, in cluster $i$, $i = 1, \ldots, M$. Also, let $X_{i}$ and $Z_{i}$ be, respectively, $n_{i} \times p$ and $n_{i} \times q$ design matrices for cluster $i$. I assume that the random effects have variance-covariance matrix $\Sigma_{1}$ for all $i = 1, \ldots, M$ and that the $Y_{i}$'s are independent of one another. The structure of $\Sigma_1$ is purposely left unspecified. The relative precision matrix $\sigma_{2}^{2}\Sigma_{1}^{-1}$ is parameterised in terms of an unrestricted $m$-dimensional vector, $1 \leq m \leq q(q+1)/2$, of non-redundant parameters $\xi$ \citep{pinheiro_bates}. The parameter to be estimated is then $\theta = \left(\beta^{\top},\xi^{\top},\sigma_{2}\right)^{\top}$ of dimension $(p + m + 1) \times 1$. The $n \times n$ identity matrix will be denoted by $I_{n}$.
\subsection{Normal-Laplace convolution}\label{sec:A.2}
Let $w_{i} = (w_{i1}, \ldots, w_{in_{i}})^{\top}$ be a $n_{i} \times 1$ vector of independent standard exponential variates and let $D_{i} = \mathrm{diag}(w_{i})$. The NL model can be written as \begin{equation}\label{eq:A.1} Y_{i} = X_{i}\beta + Z_{i}\nu_{1i} + D^{1/2}_{i}v_{i}, \end{equation} where $\nu_{1i} \sim \mathcal{N}_{q}(0, \Sigma_{1})$ and $v_{i} \sim \mathcal{N}_{n_{i}}(0, \sigma^{2}_{2} I_{n_{i}})$. The model can be simplified by convolving $\nu_{1i}$ and $v_{i}$ conditional on $w_{i}$, i.e., by integrating out the random effects \[
g\left(y_{i},w_{i}\right) = \int_{\mathbb{R}^q} g\left(y_{i},\nu_{1i}|w_{i}\right)h\left(w_{i}\right) \,\rd \nu_{1i} = g\left(y_{i}|w_{i}\right)h\left(w_{i}\right), \]
where $y_{i}|w_{i} \sim \mathcal{N}_{n_{i}}\left(X_{i}\beta, \Omega_{i}\right)$, $\Omega_{i} = Z_{i}\Sigma_{1}Z_{i}^{\top} + \sigma^{2}_{2}D_{i}$, and $h\left(w_{i}\right) = \prod_{j = 1}^{n_{i}} \exp\left(-w_{ij}\right)$.
\subsection{Laplace-Normal convolution}\label{sec:A.3}
The LN model can be written as \begin{equation}\label{eq:A.2} Y_{i} = X_{i}\beta + \sqrt{w_{i}}Z_{i}v_{i} + \nu_{2i}, \end{equation} where $w_{i}$ is a standard exponential variate, $v_{i} \sim \mathcal{N}_{q}(0, \Sigma_{1})$, and $\nu_{2i} \sim \mathcal{N}_{n_{i}}(0, \sigma^{2}_{2} I_{n_{i}})$. The normal component of the random effects can be integrated out as follows \[
g\left(y_{i},w_{i}\right) = \int_{\mathbb{R}^q} g\left(y_{i},v_{i}|w_{i}\right)h\left(w_{i}\right) \,\rd v_{i} = g\left(y_{i}|w_{i}\right)h\left(w_{i}\right), \]
where $y_{i}|w_{i} \sim \mathcal{N}_{n_{i}}\left(X_{i}\beta, \Omega_{i}\right)$, $\Omega_{i} = w_{i}Z_{i}\Sigma_{1}Z_{i}^{\top} + \sigma^{2}_{2} I_{n_{i}}$, and $h(w_{i}) = \exp\left(-w_{i}\right)$.
\subsection{Laplace-Laplace convolution}\label{sec:A.4}
Let $w_{i} = \left(w_{i,1},w_{i,2}^{\top}\right)^{\top}$ be a $(1 + n_{i}) \times 1$ vector of independent standard exponential variates, where $w_{i,2} = (w_{i1,2}, \ldots, w_{in_{i},2})^{\top}$, and let $D_{i} = \mathrm{diag}\left(w_{i,2}\right)$. The LL model can be written as \begin{equation}\label{eq:A.3} Y_{i} = X_{i}\beta + \sqrt{w_{i,1}}Z_{i}v_{1i} + D^{1/2}_{i}v_{2i}, \end{equation} where $v_{1i} \sim \mathcal{N}_{q}(0, \Sigma_{1})$ and $v_{2i} \sim \mathcal{N}_{n_{i}}(0, \sigma^{2}_{2} I_{n_{i}})$. As before, the joint density can be simplified to \[
g\left(y_{i},w_{i}\right) = \int_{\mathbb{R}^q} g\left(y_{i},v_{1i}|w_{i}\right)h\left(w_{i}\right) \,\rd v_{1i} = g\left(y_{i}|w_{i}\right)h\left(w_{i}\right), \]
where $y_{i}|w_{i} \sim \mathcal{N}_{n_{i}}\left(X_{i}\beta, \Omega_{i}\right)$, $\Omega_{i} = w_{i,1}Z_{i}\Sigma_{1}Z_{i}^{\top} + \sigma^{2}_{2} D_{i}$, and $h\left(w_{i}\right) = \exp(-w_{i,1})\cdot \\ \prod_{j = 1}^{n_{i}} \exp\left(-w_{ij,2}\right)$.
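The three conditional covariance matrices above differ only in where the exponential mixing variables enter; the following sketch (Python, with names of my own choosing) assembles $\Omega_{i}$ and the conditional log-density $\log g(y_{i}|w_{i})$ for the NL, LN, and LL cases:
\begin{verbatim}
# Omega_i = Cov(y_i | w_i) for the NL, LN and LL convolutions
# (Sections A.2-A.4).  Z: n_i x q, Sigma1: q x q, sigma2: error scale.
import numpy as np
from scipy.stats import multivariate_normal

def omega(model, Z, Sigma1, sigma2, w):
    R = Z @ Sigma1 @ Z.T
    if model == "NL":      # w: n_i exponentials on the errors
        return R + sigma2**2 * np.diag(w)
    if model == "LN":      # w: one exponential on the random effects
        return w * R + sigma2**2 * np.eye(Z.shape[0])
    if model == "LL":      # w = (w1, w2): one + n_i exponentials
        return w[0] * R + sigma2**2 * np.diag(np.asarray(w[1]))
    raise ValueError(model)

def cond_loglik(y, Xbeta, Omega):
    # log g(y_i | w_i): multivariate normal density
    return multivariate_normal.logpdf(y, mean=Xbeta, cov=Omega)
\end{verbatim}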
\subsection{The algorithm} \label{sec:A.5}
The joint density $g\left(y,w\right)$ could be further integrated to obtain $g\left(y\right) = \int g\left(y|w\right) \cdot h(w)\,\rd w$. Except for the normal-normal (NN) model, the marginal likelihood of $Y_{i}$ does not seem to have an immediate known form. Numerical integration could have some appeal since this integral would reduce to a Gauss-Laguerre quadrature ($h(w)$ is standard exponential). However, since quadrature methods are notoriously inefficient if the dimension of the integral is large, I consider an alternative approach based on Monte Carlo EM (MCEM) estimation. In this case, the unobservable variable $w$ is sampled from the conditional density $g(w|y)$. While the Monte Carlo sample size does not depend as much on dimensionality as quadrature methods do, convergence can be slower for MCEM than for quadrature-based methods \citep{mclachlan_2008}.
The $i$th contribution to the complete data log-likelihood for the models (\ref{eq:A.1})-(\ref{eq:A.3}) is given by \begin{equation}\label{eq:A.4}
\ell\left(\theta; y_{i},w_{i}\right) = \log g\left(y_{i}|w_{i}\right) + \log h(w_{i}). \end{equation} Note that $h(w_{i})$ does not depend on $\theta$. The EM approach alternates between an \begin{itemize}
\item[(i)] expectation step (E-step) $Q_{i}(\theta|\theta^{(t)}) = \operatorname{E}_{w|y,\theta^{(t)}}\left\{\ell\left(\theta; y_{i},w_{i}\right) \right\}$, $i = 1, \ldots, M$; and a
\item[(ii)] maximisation step (M-step) $\theta^{(t+1)} = \underset{\theta}{\operatorname{arg\,max}} \ \sum_{i} Q_{i}(\theta|\theta^{(t)})$, \end{itemize}
where $\theta^{(t)}$ is the estimate of the parameter after $t$ cycles. The expectation in step (i) is taken with respect to $h\left(w_{i}|y_{i},\theta^{(t)}\right) \propto g\left(y_{i}|w_{i},\theta^{(t)}\right)h(w_{i})$, that is, the distribution of the unobservable data $w_{i}$ conditional on the observed data $y_{i}$ and the current estimate of $\theta$. Given that the latter density does not have an immediate known form, I consider a Monte Carlo approach and use the following numerical approximation \begin{equation}\label{eq:A.5}
\tilde{Q}_{i}(\theta|\theta^{(t)}) = \dfrac{1}{K}\sum_{k=1}^{K}\left\{\ell\left(\theta; y_{i},w^{(t)}_{ik}\right) \right\}, \end{equation}
where $w^{(t)}_{ik}$ is a vector of appropriate dimensions sampled from $h\left(w_{i}|y_{i},\theta^{(t)}\right)$ at iteration $t$. The number of samples $K$ can be fixed at the same value for all iterations or may vary with $t$. The approximate complete data log-likelihood for all clusters (Q-function), averaged over $w|y$, is given by \begin{align}\label{eq:A.6}
\tilde{Q} (\theta|\theta^{(t)})\equiv \sum_{i=1}^{M}\tilde{Q}_{i}(\theta|\theta^{(t)}) = & \dfrac{1}{K}\sum_{k=1}^{K}\sum_{i=1}^{M}- \frac{n_{i}}{2}\log(2\pi) -\dfrac{1}{2}\log |\Omega_{ik}|\\ \nonumber & - \frac{1}{2}e_{i}^{\top}{\Omega_{ik}}^{-1}e_{i} + \log h\left(w^{(t)}_{ik}\right), \end{align} where $e_{i}=y_{i}-X_{i}\beta$, $\Omega_{ik} = \sigma_{2}^{2} \Psi_{ik}$, \begin{equation*} \Psi_{ik} = \begin{cases} Z_{i}\dot{\Sigma}_{1}Z_{i}^{\top} + D_{ik} & \text{with $D_{ik} = \mathrm{diag}\left(w^{(t)}_{ik}\right)$ for the NL model,}\\[5pt] w^{(t)}_{ik}Z_{i}\dot{\Sigma}_{1}Z_{i}^{\top} + I_{n_{i}} & \text{for the LN model,}\\[5pt] w^{(t)}_{ik,1}Z_{i}\dot{\Sigma}_{1}Z_{i}^{\top} + D_{ik} & \text{with $D_{ik} = \mathrm{diag}\left(w^{(t)}_{ik,2}\right)$ for the LL model,}\\ \end{cases} \end{equation*} and $\dot{\Sigma}_{1}= \sigma_{2}^{-2}\Sigma_{1}$ is the scaled variance-covariance matrix of the random effects. Note that all the information given by $\theta^{(t)}$ is contained in $\Omega_{ik}$ which depends on $w^{(t)}_{ik}$ (the superscript $(t)$ has been dropped from $\Omega_{ik}$, $\Psi_{ik}$, and $D_{ik}$ to ease notation). Furthermore, the parameter $\xi$ is defined to be the vector of non-zero elements of the upper triangle of the matrix logarithm of $U$, where $U$ is the $q \times q$ matrix obtained from the Cholesky decomposition $\dot{\Sigma}^{-1}_{1} = U^{\top}U$ \citep{pinheiro_bates}.
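As an illustration of this unconstrained parameterisation, the sketch below (Python; \texttt{scipy}'s \texttt{expm} stands in for the matrix exponential, and the function name is mine) maps a real vector $\xi$ back to the relative precision matrix $\dot{\Sigma}_{1}^{-1} = U^{\top}U$:
\begin{verbatim}
# Map xi (the q(q+1)/2 upper-triangle entries of log U) to the
# relative precision matrix U'U; any real xi gives a positive-
# definite result since the exponential of a triangular matrix has
# a positive diagonal.  Illustrative reconstruction only.
import numpy as np
from scipy.linalg import expm

def xi_to_precision(xi, q):
    logU = np.zeros((q, q))
    logU[np.triu_indices(q)] = xi   # upper triangle, diagonal included
    U = expm(logU)                  # triangular matrix exponential
    return U.T @ U                  # = sigma2^{-2} Sigma1^{-1}
\end{verbatim}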
The Q-function (\ref{eq:A.6}) can be easily maximised with respect to $\beta$, $\xi$, and $\sigma_{2}$ using standard (restricted) maximum likelihood formulas for linear mixed models \citep{pinheiro_bates,demidenko_2013}. Indeed, the derivative of (\ref{eq:A.6}) with respect to $\theta$ has the familiar form \begin{equation}\label{eq:A.7}
\tilde{Q}_{\ast}(\theta|\theta^{(t)}) = \left(\begin{array}{c} \dfrac{1}{K}\sum_{k=1}^{K}\sum_{i=1}^{M} \sigma_{2}^{-2} X_{i}^{\top} \Psi_{ik}^{-1}e_{i}\\[10pt] -\dfrac{1}{2K}\sum_{i=1}^{M}\sum_{k=1}^{K} Z_{i}^{\top}\Psi_{ik}^{-1}Z_{i} - \sigma_{2}^{-2}Z_{i}^{\top}\Psi_{ik}^{-1}e_{i}e_{i}^{\top}\Psi_{ik}^{-1}Z_{i}\\[10pt] -\dfrac{1}{2}N\sigma_{2}^{-2}+\dfrac{1}{2K}\sigma_{2}^{-4}\sum_{k=1}^{K}\sum_{i=1}^{M}e_{i}^{\top}\Psi_{ik}^{-1}e_{i} \end{array}\right), \end{equation}
where $N = \sum_{i}^{M}n_{i}$. Since the system of equations $\tilde{Q}_{\ast}(\theta|\theta^{(t)}) = 0$ does not have a simultaneous closed-form solution, we must resort to an iterative algorithm (e.g., Newton--Raphson). Note, however, that for fixed $\Psi_{ik}$ at iteration $t$, the Q-function is maximised by \[ \hat{\beta} = \left(\sum_{i=1}^{M}X_{i}^{\top}\Psi_{ik}^{-1}X_{i}\right)^{-1}\left(\sum_{i=1}^{M}X_{i}^{\top}\Psi_{ik}^{-1}y_{i}\right). \] Thus, for the NL, LN, and LL models, the EM estimate $\hat{\beta}$ can be seen as the solution of the generalised least squares (GLS) with weights that depend on the sampled values $w_{ik}^{(t)}$. The variance-covariance of $\hat{\beta}$, \[ \mathrm{cov}\left(\hat{\beta}\right) = \sigma^{2}_{2}\left(\sum_{i=1}^{M} X_{i}^{\top}\Psi_{ik}^{-1}X_{i}\right)^{-1}, \] is a by-product of fitting routines from commonly available software.
Similarly, for fixed $\Psi_{ik}$ at iteration $t$, the GLS estimate of $\sigma_{2}^{2}$ is \[ \hat{\sigma}_{2}^{2} = \frac{1}{N}\left\{\left(\sum_{i=1}^{M}y_{i}^{\top}\Psi_{ik}^{-1}y_{i}\right) - \left(\sum_{i=1}^{M}X_{i}^{\top}\Psi_{ik}^{-1}y_{i}\right)^{\top}\left(\sum_{i=1}^{M}X_{i}^{\top}\Psi_{ik}^{-1}X_{i}\right)^{-1} \left(\sum_{i=1}^{M}X_{i}^{\top}\Psi_{ik}^{-1}y_{i}\right)\right\}. \]
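Put together, the fixed-$\Psi$ M-step is a weighted least-squares update; a compact sketch (Python, with illustrative data structures, averaging over the $K$ Monte Carlo draws as in (\ref{eq:A.6})) is:
\begin{verbatim}
# M-step at fixed Psi: GLS for beta and the profiled sigma2^2,
# pooling clusters i = 1..M and Monte Carlo draws k = 1..K.
# Psi[k][i] is the n_i x n_i matrix for draw k, cluster i.
import numpy as np

def m_step(y, X, Psi):
    K, M = len(Psi), len(y)
    N = sum(len(yi) for yi in y)
    A = 0.0
    b = 0.0
    for k in range(K):
        for i in range(M):
            P = np.linalg.inv(Psi[k][i])
            A = A + X[i].T @ P @ X[i]
            b = b + X[i].T @ P @ y[i]
    beta = np.linalg.solve(A, b)
    rss = sum((y[i] - X[i] @ beta) @ np.linalg.inv(Psi[k][i])
              @ (y[i] - X[i] @ beta)
              for k in range(K) for i in range(M))
    sigma2sq = rss / (N * K)        # cf. the 1/N factor above
    return beta, sigma2sq
\end{verbatim}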
The E-step is updated with $\theta^{(t+1)}$ and the algorithm stops when $\Delta_{h:t,t+1} \left\{\tilde{Q} (\theta|\theta^{(h)})\right\} < \delta$ or $\Delta_{h:t,t+1} \left\{\theta_{l}^{(h)}\right\} < \delta$, $l = 1,\ldots,p + m + 1$, where $\Delta_{h:t,t+1}\left\{u^{(h)}\right\}$ is the (absolute or relative) change in $u$ between iterations $t$ and $t+1$, and $\delta$ is an appropriately small constant. The starting values $\theta^{(0)}$ can be obtained from an LME model.
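Schematically, the whole MCEM cycle then reads as follows (a skeleton only: the sampler for $h(w_{i}|y_{i},\theta^{(t)})$ and the M-step routine are passed in, as their details depend on the model):
\begin{verbatim}
# MCEM skeleton for Section A.5; sample_w draws K vectors from
# h(w_i | y_i, theta) (e.g. by adaptive rejection Metropolis), and
# m_step maximises the Monte Carlo Q-function.  Names illustrative.
def mcem(theta0, sample_w, m_step, q_fun, n_clusters,
         K=100, tol=5e-4, max_iter=100):
    theta, q_old = theta0, float("-inf")
    for t in range(max_iter):
        W = [sample_w(i, theta, K) for i in range(n_clusters)]  # E-step
        theta = m_step(W, theta)                                # M-step
        q_new = q_fun(W, theta)
        if abs(q_new - q_old) < tol:                            # stopping
            break
        q_old = q_new
    return theta
\end{verbatim}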
Finally, standard errors for $\hat{\theta}$ can be computed using the methods described in \cite{mclachlan_2008}. See also the application of Rubin's rules for multiple imputation to Monte Carlo EM samples \citep{goetghebeur_2000,geraci_farcomeni}.
\subsection{Residual heteroscedasticity and correlation} \label{sec:A.6}
In the previous sections, I assumed that the within-group errors are independent with common scale parameter $\sigma_{2}$. Using the scale mixture representation, it is immediate to extend the NL, LN, and LL models to the case of heteroscedastic and correlated errors. In particular, let's assume $\lambda_{2i} \sim \mathcal{L}_{n_{i}}(0,\Sigma_{2i})$ for the NL and LL convolutions, and $\nu_{2i} \sim \mathcal{N}_{n_{i}}(0, \Sigma_{2i})$ for the LN convolution, with general $\Sigma_{2i}$, $i = 1,\dots,M$. Then the variance-covariance matrix in (\ref{eq:A.6}) can be written as \begin{equation*} \Omega_{ik} = \begin{cases} Z_{i}\Sigma_{1}Z_{i}^{\top} + w^{(t)}_{ik}\Sigma_{2i} & \text{for the NL model,}\\[5pt] w^{(t)}_{ik}Z_{i}\Sigma_{1}Z_{i}^{\top} + \Sigma_{2i} & \text{for the LN model,}\\[5pt] w^{(t)}_{ik,1}Z_{i}\Sigma_{1}Z_{i}^{\top} + w^{(t)}_{ik,2}\Sigma_{2i} & \text{for the LL model.}\\ \end{cases} \end{equation*}
\subsection{Monte Carlo} \label{sec:A.7}
In this section, I report on the results of a small simulation study. The purpose was to investigate the bias, variance, and mean squared error (MSE) of $\hat{\beta}$ and $\hat{\xi}$ for the NN, NL, LN, and LL models when data were generated according to the following four scenarios: \begin{enumerate} \item $Y_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}\nu_{1i} + \nu_{2ij}$, \item $Y_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}\nu_{1i} + \lambda_{2ij}$, \item $Y_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}\lambda_{1i} + \nu_{2ij}$, \item $Y_{ij} = x_{ij}^{\top}\beta + z_{ij}^{\top}\lambda_{1i} + \lambda_{2ij}$, \end{enumerate} where $\beta = (\beta_{0}, \beta_{1})^{\top} = (1, 2)^{\top}$, $x_{ij} = (1, x_{1ij})^{\top}$, $z_{ij} = x_{ij}$, with $x_{1ij} = \gamma_{i} + \zeta_{ij}$, $\gamma_{i}\sim \mathcal{N}(0,1)$, and $\zeta_{ij}\sim \mathcal{N}(0,1)$. The random effects were sampled from multivariate normal ($\nu_{1}$) or Laplace ($\lambda_{1}$) distributions with variance-covariance \[ \Sigma_{1} = \left[\begin{array}{cc}
3 & 1 \\
1 & 2
\end{array}\right], \] while the errors were drawn from normal ($\nu_{2}$) or Laplace ($\lambda_{2}$) distributions with scale $\sigma_2 = 2$, independently. The unrestricted parameter for $\sigma_{2}^{2}\Sigma_{1}^{-1}$ is given by $\xi = (\xi_{1}, \xi_{2}, \xi_{3})^{\top} = (-0.183, 0.215, -0.398)^{\top}$.
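The data-generating step of the four scenarios follows directly from the scale-mixture representation; a sketch for one cluster (Python, with illustrative names) is:
\begin{verbatim}
# Generate one cluster for scenarios 1-4: random effects from
# N_2(0, Sigma1) or L_2(0, Sigma1) (via sqrt(W)*V), errors normal or
# Laplace with sigma2 = 2.  x_ij = gamma_i + zeta_ij, z_ij = x_ij.
import numpy as np

rng = np.random.default_rng(7)
beta = np.array([1.0, 2.0])
Sigma1 = np.array([[3.0, 1.0], [1.0, 2.0]])
sigma2, n = 2.0, 5

def one_cluster(laplace_re, laplace_err):
    X = np.column_stack([np.ones(n),
                         rng.normal() + rng.normal(size=n)])
    u = rng.multivariate_normal(np.zeros(2), Sigma1)
    if laplace_re:
        u = np.sqrt(rng.exponential()) * u
    e = sigma2 * rng.standard_normal(n)
    if laplace_err:
        e = e * np.sqrt(rng.exponential(size=n))
    return X, X @ beta + X @ u + e      # z_ij = x_ij

# e.g. scenario 4 (LL): one_cluster(True, True)
\end{verbatim}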
A balanced design with $n = 5$ repeated measurements per cluster and $M = 100$ clusters was used. For each scenario, 100 datasets were replicated. NN models were fitted using MLE routines from the \texttt{nlme} package \citep{pinheiro_2014}. The NL, LN, and LL models were fitted using the EM algorithm discussed above. In particular, Monte Carlo samples for the E-step were drawn using an adaptive rejection Metropolis sampler \citep{gilks} as implemented in the package \texttt{HI} \citep{petris}. The number of samples was set to increase at each EM iteration as a multiple of 20, capped at 500, thus $K = \min\{20\cdot t,500\}$, $t = 1, 2, \ldots$. The Q-function (\ref{eq:A.6}) was maximised using the MLE equations for NN models \citep{demidenko_2013}. The convergence criterion was defined as $\Delta_{h:t,t+1} \left\{\tilde{Q} (\theta|\theta^{(h)})\right\} < 0.001$ and the maximum number of EM iterations was set to 100.
The results of the simulation study are reported in Tables~\ref{tab:5}-\ref{tab:7}. The NL and LL models showed some advantages in terms of bias as compared to the NN model in all considered scenarios, including when the data were generated from an NN model. However, in the latter case the lower bias was more than compensated by a larger variability, which made the MSE for the NL and LL models up to about $44\%$ larger than that for the NN model. In contrast, the NN model was less competitive than the NL and LL models when data were generated according to these two scenarios, with losses up to about $40\%$ in terms of MSE.
The LN model's performance was somewhat poor, even when the data were generated from an LN model. In a separate analysis using the same data (results not shown), the LN models were re-estimated with the number $K$ of Monte Carlo samples fixed at 500 at all iterations. The relative bias decreased to values below or near 1 for both $\hat{\beta}$ and $\hat{\xi}$, whereas the relative MSE was still above 1.
The average estimation times (standard deviation) for the NL, LN, and LL models were, respectively, 6.6 (10.0), 17.7 (25.3), and 9.8 (11.0) minutes on a 64-bit machine with 16 GB of RAM and a quad-core processor at 3.60 GHz. The average number of iterations to convergence for all three models was 14 (standard deviation 12).
\begin{table}[h!] \caption{The estimated bias of $\hat{\beta}$ and $\hat{\xi}$ for the normal-normal (NN) model is reported in brackets. The bias for the normal-Laplace (NL), Laplace-normal (LN), and Laplace-Laplace (LL) models is relative to the NN model.} \centering \begin{tabular}{lrrrrr} \toprule
& $\hat{\beta}_{0}$ & $\hat{\beta}_{1}$ & $\hat{\xi}_{1}$ & $\hat{\xi}_{2}$ & $\hat{\xi}_{3}$\\ \hline \multicolumn{6}{l}{\textit{Scenario 1: NN data}}\\ \hline NN & ($-$0.012) & (0.006) & ($-$0.036) & (0.007) & ($-$0.035) \\ NL & 0.324 & 0.254 & 1.648 & $-$0.418 & 1.977 \\ LN & 0.088 & 1.424 & $-$1.823 & 8.010 & $-$2.997 \\ LL & 0.840 & $-$0.369 & $-$0.402 & 0.784 & $-$0.001 \\ \hline \multicolumn{6}{l}{\textit{Scenario 2: NL data}}\\ \hline NN & (0.002) & ($-$0.016) & ($-$0.021) & (0.014) & ($-$0.062) \\ NL & $-$1.301 & 0.503 & 0.970 & 0.848 & 0.881 \\ \hline \multicolumn{6}{l}{\textit{Scenario 3: LN data}}\\ \hline NN & ($-$0.005) & (0.020) & ($-$0.072) & (0.044) & ($-$0.042) \\ LN & 1.731 & 1.232 & $-$0.243 & 2.363 & $-$1.822 \\ \hline \multicolumn{6}{l}{\textit{Scenario 4: LL data}}\\ \hline NN & ($-$0.006) & (0.012) & ($-$0.029) & (0.023) & ($-$0.079) \\ LL & 0.716 & 0.483 & $-$0.504 & 0.829 & 0.345 \\ \hline \end{tabular}\label{tab:5} \end{table}
\begin{table}[h!] \caption{The estimated variance of $\hat{\beta}$ and $\hat{\xi}$ for the normal-normal (NN) model is reported in brackets. The variance for the normal-Laplace (NL), Laplace-normal (LN), and Laplace-Laplace (LL) models is relative to the NN model.} \centering \begin{tabular}{lrrrrr} \toprule
& $\hat{\beta}_{0}$ & $\hat{\beta}_{1}$ & $\hat{\xi}_{1}$ & $\hat{\xi}_{2}$ & $\hat{\xi}_{3}$\\ \hline \multicolumn{6}{l}{\textit{Scenario 1: NN data}}\\ \hline NN & (0.040) & (0.028) & (0.012) & (0.005) & (0.014) \\ NL & 1.123 & 1.081 & 1.021 & 0.994 & 1.259 \\ LN & 1.362 & 1.842 & 2.207 & 2.709 & 1.665\\ LL & 1.287 & 1.441 & 1.174 & 1.246 & 1.087 \\ \hline \multicolumn{6}{l}{\textit{Scenario 2: NL data}}\\ \hline NN & (0.048) & (0.029) & (0.018) & (0.007) & (0.016) \\ NL & 0.894 & 1.042 & 0.925 & 0.904 & 0.843 \\ \hline \multicolumn{6}{l}{\textit{Scenario 3: LN data}}\\ \hline NN & (0.034) & (0.037) & (0.039) & (0.015) & (0.040) \\ LN & 1.126 & 1.146 & 2.020 & 2.211 & 1.691 \\ \hline \multicolumn{6}{l}{\textit{Scenario 4: LL data}}\\ \hline NN & (0.043) & (0.024) & (0.026) & (0.011) & (0.026) \\ LL & 0.744 & 0.862 & 0.813 & 0.598 & 0.997 \\ \hline \end{tabular}\label{tab:6} \end{table}
\begin{table}[h!] \caption{The estimated mean squared error (MSE) of $\hat{\beta}$ and $\hat{\xi}$ for the normal-normal (NN) model is reported in brackets. The MSE for the normal-Laplace (NL), Laplace-normal (LN), and Laplace-Laplace (LL) models is relative to the NN model.} \centering \begin{tabular}{lrrrrr} \toprule
& $\hat{\beta}_{0}$ & $\hat{\beta}_{1}$ & $\hat{\xi}_{1}$ & $\hat{\xi}_{2}$ & $\hat{\xi}_{3}$\\ \hline \multicolumn{6}{l}{\textit{Scenario 1: NN data}}\\ \hline NN & (0.040) & (0.028) & (0.013) & (0.005) & (0.016) \\ NL & 1.119 & 1.080 & 1.190 & 0.987 & 1.471 \\ LN & 1.357 & 1.842 & 2.319 & 3.242 & 2.248 \\ LL & 1.285 & 1.440 & 1.073 & 1.240 & 1.001 \\ \hline \multicolumn{6}{l}{\textit{Scenario 2: NL data}}\\ \hline NN & (0.048) & (0.029) & (0.018) & (0.007) & (0.020) \\ NL & 0.894 & 1.035 & 0.925 & 0.898 & 0.830 \\ \hline \multicolumn{6}{l}{\textit{Scenario 3: LN data}}\\ \hline NN & (0.034) & (0.038) & (0.044) & (0.017) & (0.042) \\ LN & 1.127 & 1.150 & 1.787 & 2.598 & 1.759 \\ \hline \multicolumn{6}{l}{\textit{Scenario 4: LL data}}\\ \hline NN & (0.043) & (0.025) & (0.027) & (0.012) & (0.033) \\ LL & 0.744 & 0.859 & 0.796 & 0.602 & 0.830 \\ \hline \end{tabular}\label{tab:7} \end{table}
\end{document}
Triangle $ABC$ has side lengths $AB = 12$, $BC = 25$, and $CA = 17$. Rectangle $PQRS$ has vertex $P$ on $\overline{AB}$, vertex $Q$ on $\overline{AC}$, and vertices $R$ and $S$ on $\overline{BC}$. In terms of the side length $PQ = \omega$, the area of $PQRS$ can be expressed as the quadratic polynomial\[Area(PQRS) = \alpha \omega - \beta \omega^2.\]
Then the coefficient $\beta = \frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
If $\omega = 25$, the area of rectangle $PQRS$ is $0$, so
\[\alpha\omega - \beta\omega^2 = 25\alpha - 625\beta = 0\]
and $\alpha = 25\beta$. If $\omega = \frac{25}{2}$, then $PQ$ is parallel to $\overline{BC}$ with half its length, so $P$ and $Q$ are the midpoints of $\overline{AB}$ and $\overline{AC}$; we can therefore reflect $APQ$ over $PQ$, $PBS$ over $PS$, and $QCR$ over $QR$ to completely cover rectangle $PQRS$, so the area of $PQRS$ is half the area of the triangle. Using Heron's formula, since $s = \frac{12 + 17 + 25}{2} = 27$,
\[[ABC] = \sqrt{27 \cdot 15 \cdot 10 \cdot 2} = 90\]
so
\[45 = \alpha\omega - \beta\omega^2 = \frac{625}{2} \beta - \beta\frac{625}{4} = \beta\frac{625}{4}\]
and
\[\beta = \frac{180}{625} = \frac{36}{125}\]
so the answer is $m + n = 36 + 125 = \boxed{161}$.
Ultra High Energy Cosmic Rays 2018
Oct 8, 2018, 12:00 PM → Oct 12, 2018, 7:00 PM Europe/Paris
Friedel Amphitheater (Ecole Supérieure de Chimie, Paris)
Chimie ParisTech École Nationale Supérieure de Chimie de Paris 11, rue Pierre et Marie Curie 75231 PARIS Cedex 05
Ralph Engel (Karlsruhe Institute of Technology)
Registration to UHECR 2018
Agustín Sánchez Losa
Alan Watson
Alexander Korochkin
Alexey Yushkov
Alvaro Taboada
Amy Connolly
Anabella Araudo
Analisa Gabriela Mariazzi
Andreas Haungs
Andrew Koshelkin
Andrey Grinyuk
Andrii Neronov
Antoine Letessier Selvon
Antonella Castellina
Antony Escudie
Arjen van Vliet
Arman Tursunov
Armando di Matteo
Bianca Keilhauer
Björn Eichmann
Bouyahiaoui Makarim
Bruce Dawson
Charles Jui
Claire Guépin
Corinne BERAT
Darko Veberic
David d'Enterria
David Schmidt
Dennis Soldin
Dmitri Ivanov
Dmitri Semikoz
Donghwa Kang
Douglas Bergman
Eiji Kido
Enrique Zas
Etienne Parizot
felix riehn
Foteini Oikonomou
Francesca Bisconti
Francesco Salamida
Fred Sarazin
Gordon Thomson
Grigory Rubtsov
Günter Sigl
Hang Bae Kim
Hans Dembinski
Hans Klages
Hermann-Josef Mathes
Hiroaki Menjo
Hiroyuki Sagawa
Iftach Sadeh
IL H. PARK
Ioana Maris
Isabelle Lhenry-Yvon
Ivan Karpikov
Jaime Alvarez-Muniz
Jean-Noël CAPDEVIELLE
Jihyun Kim
JinLin Han
Joao de Mello Neto
John Kirk
John Krizmanic
John Matthews`
Jon Paul Lundquist
Jonas Heinze
Jonathan Biteau
Jörg Hörandel
Karen Andeen
Karl-Heinz Kampert
Kazumasa Kawata
Ke Fang
Kenji Shinozaki
Kevin-Druis Merenda
Krzysztof Piotrzkowski
Kumiko Kotera
Laura Valore
Leonidas RESVANIS
Lorenzo Caccianiga
Lorenzo Perrone
Lu Lu
Luis del Peral
Marco Ricci
Marcus Wirtz
Maria D Rodriguez Frias
Mario Bertaina
Markus Ackermann
Markus Roth
Martin Schimassek
Martina Bohacova
Matthias Kleifges
Michael Kachelriess
Michael Unger
Mikhail Panasyuk
Mikhail Zotov
Mohamed Cherif Talai
Olivier Deligny
Olivier Martineau
Paolo Lipari
Pavel Klimov
Peter Grieder
Peter Tinyakov
Philippe Gorodetzky
Piera Luisa Ghia
Piergiorgio Picozza
Pierre Sokolsky
Quentin Luce
Ralf Ulrich
Ralph Engel
Roberta Sparvoli
ROSA MAYTA PALACIOS
Ryuji Takeishi
Sarah Mueller
Sergey Ostapchenko
Sergio Sciutto
Shigeo Kimura
Shoichi Ogio
Sofia Andringa
Stavros Nonis
Takashi Sako
Takayuki Tomida
Tanguy Pierog
Teresa Bister
Tiina Suomijarvi
Tim Huege
tokonatsu yamamoto
Tony Bell
Toshihiro Fujii
Valentin Decoene
Valerio Verzi
Vasily Prosin
Vladimir Novotny
William Hanlon
Yuichiro Tameda
Mme Isabelle Lhenry-Yvon
[email protected]
1:00 PM → 2:00 PM
Registration Friedel Amphitheater
Sessions Friedel Amphitheater
Convener: Isabelle Lhenry-Yvon (IPN Orsay)
Welcome and Opening 5m
The Highest Energy Particles in Nature – the Past, the Present and the Future 45m
Since the earliest days cosmic-ray physicists have been studying the highest-energy particles in Nature. A basic understanding of the development of electromagnetic cascades led to the first targeted searches for air showers and, soon after the discovery of charged and neutral pions, the concept of using the muon content to find the mass of the primary particles was proposed. Progress in the field has relied on the conception and mastery of new techniques, including the development of Monte Carlo simulations, and by the levels of funding available. The challenge of measuring the direction of high-energy cosmic rays was solved through the development of scintillator arrays, while the ability to detect Cherenkov light and fluorescence radiation has aided model-free estimates of the primary energy. The radio technique, demonstrated in 1965 but abandoned 10 years later, is again showing promise. I will describe something of the history of the development of the experimental methods exposing the long lead-times between their conception and successful implementation.
However, the challenge of determining the mass of the primary particles requires knowledge of hadronic physics at centre-of-mass energies well beyond those reached at the LHC, while details of key pion interactions are seriously lacking. It may be that we need to exploit anisotropies and magnetic fields in some smart manner to solve this problem. I will comment on the present state of knowledge of the key parameters to set the stage for the Working Group reports that are an important feature of this meeting. I will also briefly discuss prospects for future developments in space and on the ground.
Speaker: Prof. Alan Watson (University of Leeds)
TA Spectrum 25m
Telescope Array (TA) is measuring cosmic rays of energies from PeV to 100 EeV and higher in the Northern hemisphere. TA has two parts: the main TA and the TA low energy extension (TALE). The main TA is a hybrid detector that consists of 507 plastic scintillation counters on a 1200 m spaced square grid that are overlooked by three fluorescence detector stations. TALE is also a hybrid detector and it consists of additional fluorescence telescopes arranged to view higher elevations and an infill array of 100 plastic scintillation counters. In this work, we present a combined spectrum, over 5 orders of magnitude in energy, measured by TA and TALE, and compare these results with other experiments.
Speaker: Dmitri Ivanov (University of Utah)
Measurement of energy spectrum of ultra-high energy cosmic rays with the Pierre Auger Observatory 25m
The energy spectrum of high-energy cosmic rays measured using the Pierre Auger Observatory is presented. The measurements extend over three orders of magnitude in energy, from 3 x 10^17 eV up to the very end of the spectrum, and they benefit from the almost calorimetric estimation of the shower energies performed with the fluorescence telescopes. The huge amount of data collected with the surface detector allowed us to measure the spectrum in different regions of the sky with high precision.
We will present the results of the measurements together with a detailed description of the systematic uncertainties.
Speaker: Valerio Verzi (INFN Roma "Tor Vergata")
Auger-TA energy spectrum working group report 25m
The energy spectrum of ultra-high energy cosmic rays is the most emblematic observable for describing these particles. Beyond a few tens of EeV, the Pierre Auger Observatory and the Telescope Array, currently being exploited, provide the largest exposures ever accumulated in the Southern and the Northern hemispheres to measure independently a suppression of the intensity, in a complementary way in terms of the coverage of the sky. However, the comparison of the spectra shows differences that are not reducible to an overall uncertainty on the calibration of the energy scale used to reconstruct the extensive air showers. In line with the previous editions of the UHECR workshops, a working group common to both experiments examined these differences, focusing this time on their quantification in the region of the sky observed in common, where the spectra should agree within uncertainties once directional-exposure effects are accounted for. These differences are compared with the systematic uncertainties of each experiment. We have also revisited the methods of determining cosmic ray energies and deriving the energy spectrum. We present the SD spectrum from energy calibration based on the constant intensity cut (CIC) method, the SD spectrum from the Monte-Carlo based attenuation correction, and the hybrid spectrum, where the energies are determined from the longitudinal profile seen by the fluorescence detector.
Coffee break 40m Main Hall (Ecole Supérieure de Chimie)
Convener: Prof. Gordon Thomson (University of Utah)
Minimal model of UHECR and IceCube neutrinos 20m
In this talk I'll present a minimal model which explains the UHECR spectrum and composition and, at the same time, the IceCube astrophysical neutrino signal (M. Kachelriess et al., ``Minimal model for extragalactic cosmic rays and neutrinos,'' Phys. Rev. D 96, 083006 (2017)). I'll also discuss the galactic-extragalactic transition in the context of this model.
Speaker: Mr Dmitri Semikoz (APC, Paris)
NICHE: Air-Cherenkov light observation at the TA site 20m
An array of non-imaging Cherenkov light collectors has recently been installed at the Telescope Array Middle Drum site, in the field-of-view of the TALE FD telescopes. This allows for imaging/non-imaging Cherenkov hybrid observations of air showers in the energy range just above 1 PeV. The performance of the array and the first analyses using hybrid measurements will be presented.
Speaker: Prof. Douglas Bergman (University of Utah)
Data-driven model of the cosmic-ray flux and mass composition over all energies 20m
We present a parametrisation of the cosmic-ray flux and its mass composition over an energy range from 1 GeV to $10^{11}$ GeV, which can be used for theoretical calculations. The parametrisation provides a summary of the experimental state-of-the-art for individual elements from proton to nickel. We seamlessly combine measurements of the flux of individual elements from high-precision satellites and balloon experiments with indirect measurements of mass groups from the leading air shower experiments. We propagate both statistical and systematic uncertainties with correlations, and obtain a large flux covariance matrix as a result which can be further propagated. Variations in the energy scales of individual experiments are taken into account with nuisance parameters. We obtain a unified energy scale and adjustment factors for the energy scales of the participating experiments. Our fit has a reduced chi2 value of 1, showing that the data sets are in good agreement, if systematic uncertainties are taken into account.
Speaker: Hans Dembinski (Max Planck Institute for Nuclear Physics, Heidelberg)
Welcome Cocktail Main Hall (Ecole Supérieure de Chimie)
9:00 AM → 10:30 AM
Convener: Hiroyuki Sagawa (Institute for Cosmic Ray Research, University of Tokyo)
Particle Acceleration in Radio Galaxies 30m
Ultra-high energy cosmic rays pose an extreme challenge to theories of particle acceleration. We discuss the reasons why diffusive acceleration by shocks is a leading contender. A crucial aspect of shock acceleration is that cosmic rays must be efficiently scattered by magnetic field. This requires magnetic field amplification on scales comparable with the cosmic ray Larmor radius, which in turn indicates that the shocks cannot be fully relativistic; nor can the shocks be too slow. The lower limit, arising from the Hillas condition, on the energy processed in the shock severely restricts the range of possible sources of UHECR. These conditions combine to make the lobes of radio galaxies a likely source of UHECR.
Speaker: Prof. Tony Bell (University of Oxford)
Estimates of the Cosmic-Ray Composition with the Pierre Auger Observatory 20m
We present measurements from the Pierre Auger Observatory related to mass composition of ultra-high energy cosmic rays. Using the fluorescence telescopes of the Observatory we determine the distribution of shower maxima (Xmax) from 10^17.2 to 10^19.6 eV and derive estimates of the mean and variance of the average logarithmic mass of cosmic rays. The fraction of p, He, N and Fe nuclei as a function of energy is derived by fitting the Xmax distribution with templates from air shower simulations using the most recent version of LHC-tuned hadronic interaction models. Furthermore, we will discuss the analysis of the time structure of the signals from air showers recorded with the water-Cherenkov detectors to study the mass composition from 10^17.5 to 10^20 eV.
Speaker: Michael Unger (KIT)
uhecr_Composition_Auger.pdf
Measurements of UHECR Mass Composition by Telescope Array 20m
Telescope Array (TA) has recently published results of nearly nine years of $X_{\mathrm{max}}$ observations, providing its highest-statistics measurement of UHECR mass composition to date for energies exceeding $10^{18.2}$ eV. This analysis measured the agreement of the observed data with expectations for four different single elements. Instead of relying only on the first and second moments of $X_{\mathrm{max}}$ distributions, we have employed a morphological test of agreement between data and Monte Carlo to allow for systematic uncertainties in the data and in current UHECR hadronic models. Results of this latest analysis and implications for the UHECR composition observed by TA will be presented. TA can utilize different analysis methods to understand composition, both as a cross-check on results and as a tool to understand systematics affecting $X_{\mathrm{max}}$ measurements. The different analysis efforts underway at TA to understand composition will also be discussed.
Speaker: William Hanlon (University of Utah)
Measurements of UHECR Mass Composition by Telescope Array.pdf
Depth of maximum of air-shower profiles: testing the compatibility of measurements performed at the Pierre Auger Observatory and the Telescope Array experiment 20m
At the Pierre Auger Observatory and the Telescope Array (TA) experiment the measurements of depths of maximum of air-shower profiles, $X_{\rm max}$, are performed using direct observations of the longitudinal development of showers with the help of the fluorescence telescopes. Though the same detection technique is used by both experiments, the straightforward comparison of the characteristics of the measured $X_{\rm max}$ distributions is not possible due to the different approaches to the analysis of the recorded events. In this work, the Auger-TA Composition Working Group presents a technique to compare the $X_{\rm max}$ measurements from the Auger Observatory and TA. Using this technique the compatibility of the measured $X_{\rm max}$ distributions and of their first two moments is tested for energies $E > 10^{18.2}$ eV. The results of the tests show that the characteristics of the $X_{\rm max}$ distributions recorded by the Auger Observatory and TA are compatible within the systematic and statistical uncertainties.
Speaker: Alexey Yushkov (Institute of Physics AS CR, Prague)
Yushkov_MassWG_AugerTA_UHECR2018_Presented.pdf
10:30 AM → 11:00 AM
Coffee break 30m Hall
Chimie Paris Tech
POSTER SESSION Main Hall (Ecole Supérieure de Chimie)
Search and study of extensive air shower events with the TUS space experiment. 3m
The TUS experiment is designed to investigate ultra high energy cosmic rays (UHECR) at energies ∼100 EeV from orbit through UV measurements of extensive air showers (EAS). It is the first orbital telescope aimed at such measurements and has been taking data since April 28, 2016. The TUS detector consists of a modular Fresnel mirror and a photo receiver matrix of 16x16 PMT pixels with a field of view of ±4.5$^{\circ}$. The DAQ electronics has a main mode of operation with 0.8 μs temporal resolution and a 200 μs duration of measured waveforms. The spatial resolution in the atmosphere is 5 km, with a total field of view of about 80x80 km${^2}$. The TUS apparatus structure, methods of UHECR on-line selection and off-line data analysis are described. A few UHECR EAS candidates were found. Preliminary results of their investigation and comparison with the corresponding Monte-Carlo events are presented.
Speaker: Andrey Grinyuk (Joint Institute for Nuclear Research)
Ultra high energy cosmic rays simulations with CONEX code 3m
Ultra high energy cosmic rays (UHECR) are nowadays the subject of intense research. The existence of such rays with energies above $10^{20}$ eV is constrained by the GZK limit due to photo-pion production, or by nuclei photo-disintegration, in the interaction of UHECR with the cosmic microwave background.
In this work, detailed simulations of extensive air showers have been carried out with the help of the CONEX program in order to evaluate the depth of shower maximum of the longitudinal profile, $X_{max}$. This parameter and its fluctuations are very sensitive to the primary particle mass.
Speaker: Dr Mohamed Cherif TALAI (Badji Mokhtar University of Annaba, Department of Physics )
UHECR2018.pdf
A Quality Control of High Speed Photon Detector 3m
High speed photon detectors are one of the most important tools for observations of high energy cosmic rays. As the technology of photon detectors and their read-out electronics improved rapidly, it became possible to observe cosmic rays with a time resolution better than one nanosecond. To utilize such devices effectively, calibration using a short-pulse light source is mandatory. We have developed a pulse laser whose width is 60 ps and whose peak intensity is adjustable up to 100 mW. This pulse laser is composed of a simple electric circuit and a laser diode. Details of this pulse laser and its application to quality control of photon detectors are reported in this contribution.
Speakers: Mr Yusuke Inome (Konan University, Icrr), Prof. Tokonatsu Yamamoto (Konan University, ICRR)
Blazar flares as the origin of high-energy astrophysical neutrinos? 3m
The IceCube Collaboration recently announced the detection of a high-energy astrophysical neutrino consistent with arriving from the direction of the blazar TXS 0506+056 during an energetic gamma-ray flare. In light of this finding, we consider the implications for neutrino emission from blazar flares in general. We discuss the likely total contribution of blazar flares to the diffuse neutrino intensity by considering an ensemble of observational constraints. Further, we consider the multi-messenger constraints from single-zone models, showing that neutrino flares must be accompanied by X-ray and gamma-ray emission. Finally, we suggest a two-zone model that can satisfy the X-ray constraints for the 2017 flare of TXS 0506+056, in which the neutrinos are produced via either photomeson or hadronuclear processes.
Speaker: Foteini Oikonomou (ESO)
flare_poster.pdf
Multi-wavelength observation of cosmic-ray air-showers with CODALEMA/EXTASIS 4m
Over the years, significant efforts have been devoted to the understanding of the radio emission of extensive air showers (EAS) in the range [20-80] MHz but, despite some studies led until the nineties, the [1-10] MHz band has remained unused for nearly 30 years. At that time it was measured by some pioneering experiments, and also suggested by theoretical calculations, that EAS could produce a strong electric field in this band, and that there is possibly a large increase in the amplitude of the radio pulse at lower frequencies. The EXTASIS experiment, located within the radio astronomy observatory of Nançay and supported by the CODALEMA instrument, aims to reinvestigate the [1-10] MHz band, and to study the so-called "Sudden Death" contribution, the expected radiation electric field created by the particles that are stopped upon arrival at the ground. Currently, EXTASIS has confirmed some results obtained by the pioneering experiments, and offers explanations for the others, for instance the role of the underlying atmospheric electric field.
Moreover, CODALEMA has demonstrated that in the most commonly used frequency band ([20-80] MHz) the electric field profile of EAS can be well sampled, and contains all the information needed for the reconstruction of EAS: an automatic comparison between SELFAS3 simulations and data has been developed, allowing us to reconstruct the latter in (quasi-)real time.
Speaker: Antony Escudie (Subatech, IMT Atlantique, Nantes, France)
UHECR2018_poster_escudie.pdf
Development of the calibration device using UAV mounted UV-LED light source for the fluorescence detector 2m
We are developing a standard UV-LED light source as a calibration device for the fluorescence detector (FD). This device is called Opt-copter. The standard light source is mounted on a UAV and can stay at an arbitrary position within the FOV of the FD. The surveying GPS is highly accurate (~10 cm) and measures the position of the light source synchronously with the light emission. This allows us to better understand the geometric optics properties of the FD.
Speakers: Dr Takayuki Tomida (Shinshu University), For TA collaboration
The Auger@TA Project: Phase II Progress and Plans 2m
The Auger@TA project is a combined effort involving members of both the Pierre Auger Observatory and the Telescope Array experiment (TA) to cross-calibrate detectors and compare results on air showers detected at one location. We have recently reported results from Phase I of the project, during which we collected and presented data from two Auger water-Cherenkov surface-detector stations deployed into the TA experiment near the Central Laser Facility. For Phase II we will deploy a micro-array of six single-PMT Auger surface detector stations co-located with TA scintillator surface-detector stations. The Auger micro-array will trigger and collect data independently from the TA allowing for a complete end-to-end comparison of detector data, calibration, and reconstructed event quantities on a shower-by-shower basis between the TA and Auger detector systems. We describe progress towards development of the micro-array for Phase II including the preparation of surface detector water tanks, station electronics, wireless communications, trigger and data acquisition. We also outline plans for deploying the Auger@TA micro-array into the Telescope Array experiment during early 2019 with preliminary estimates for coincident air-shower rates.
Speaker: Corbin Covault (Case Western Reserve University)
11:00 AM → 12:30 PM
Convener: Prof. Karl-Heinz Kampert (Bergische Universität Wuppertal)
Telescope Array search for ultra-high energy photons and neutrinos 20m
We report ultra-high energy (> 1 EeV) photon flux limits based on the analysis of 9 years of data from the Telescope Array surface detector. The multivariate classifier is built upon 16 reconstructed parameters of the extensive air shower. These parameters are related to the curvature and the width of the shower front, the steepness of the lateral distribution function and the timing parameters of the waveforms sensitive to the shower muon content. A total of 2 photon candidates is found in the search, which is fully compatible with the expected background. The diffuse flux limits as well as the point source flux limits for all directions in the Northern hemisphere are presented. We also report limits from the ultra-high energy down-going neutrino search.
Speaker: Grigory Rubtsov (Institute for Nuclear Research of the Russian Academy of Sciences)
rubtsov_uhecr18.pdf
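As a schematic of the kind of multivariate classification described above (the real analysis uses 16 reconstructed shower parameters and its own classifier; the features and data here are synthetic):

```python
# Toy photon/hadron classifier: train a boosted-decision-tree model on
# synthetic stand-ins for shower-front curvature, LDF steepness and
# muon-sensitive timing. Illustrative only, not the TA analysis chain.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
hadrons = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.3, size=(n, 3))
photons = rng.normal(loc=[1.3, 2.4, 0.2], scale=0.3, size=(n, 3))
X = np.vstack([hadrons, photons])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
score = clf.predict_proba(X_te)[:, 1]  # photon-likeness; a cut defines candidates
print("test accuracy:", clf.score(X_te, y_te))
```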
High-energy emissions from neutron star mergers 30m
Last year, the LIGO-Virgo collaborations reported the detection of the first neutron star merger event, GW170817, which was accompanied by observations of electromagnetic counterparts from radio to gamma rays. High-energy gamma rays and neutrinos were not observed. However, mergers of neutron stars are expected to produce these high-energy particles. Relativistic jets are expected to be launched when the neutron stars merge, which can be a source of high-energy neutrinos. Also, the central remnant object after the merger event, either a black hole or a neutron star, can produce high-energy photons weeks to months after the merger. In addition, neutron star mergers produce massive and fast ejecta, which can be a source of Galactic high-energy cosmic rays, analogous to supernova remnants. In this talk, I will discuss these high-energy processes and prospects for multi-messenger detections related to neutron star mergers.
Speaker: Shigeo Kimura (Pennsylvania State University)
ShigeoKimura_UHECR2018.pdf
Ultra-high energy neutrinos from neutron-star mergers 20m
In the context of the recent multi-messenger observation of neutron-star merger GW170817, we examine whether such objects could be sources of ultra-high energy astroparticles. At first order, the energetics and the population number is promising to envisage the production of a copious amount of high-energy particles, during the first minutes to weeks from the merger. In addition, the strong radiative and baryonic environment in the kilonova ejecta can be an important background causing energy losses for cosmic-ray nuclei and producing associated high-energy neutrino emissions. We model the evolution of the photon density and the baryonic density in the kilonova ejecta and calculate numerically the signatures in terms of ultra-high energy neutrinos.
Speaker: Valentin Decoene (Institut d'Astrophysique de Paris)
UHECR_VDECOENE.pdf
Ultra-High-Energy Cosmic Rays and Neutrinos from Tidal Disruptions by Massive Black Holes 20m
In addition to the emergence of time domain astronomy, the advent of multi-messenger astronomy opens up a new window on transient high-energy sources. Through the multi-messenger study of the most energetic objects in our universe, two fundamental questions can be addressed: what are the sources of ultra-high energy cosmic rays (UHECRs) and the sources of very-high energy neutrinos?
Jetted Tidal Disruption Events (TDEs) appear as interesting candidate sources of UHECRs, with their impressive energy reservoir and estimated occurrence rates. By modeling and simulating the propagation and interaction of UHECRs in various types of radiative backgrounds, we can evaluate the signatures of TDEs powering jets in UHECRs and neutrinos. We find that we can reproduce the latest UHECR spectrum and composition results of the Auger experiment for a range of reasonable parameters. The diffuse neutrino flux associated with this scenario is found to be subdominant, but nearby events could be detected by IceCube or next-generation detectors such as IceCube-Gen2.
Speaker: Claire Guépin (IAP)
UHECR_2018_CG.pdf
12:30 PM → 2:00 PM
Lunch break 1h 30m
Convener: Tony Bell (University of Oxford)
Supergalactic Structure of Multiplets with the Telescope Array Surface Detector 20m
Evidence of supergalactic structure of multiplets has been found for ultra-high energy cosmic rays (UHECR) with energies above 10$^{19}$ eV using 7 years of data from the Telescope Array (TA) surface detector. The tested hypothesis is that UHECR sources, and intervening magnetic fields, may be correlated with the supergalactic plane, as it approximates the average matter density within the GZK horizon. This structure is measured by the average behavior of the strength of intermediate-scale correlations between event energy and distance (multiplets). These multiplets are measured in wedge-like shapes on the spherical surface of the field-of-view to account for uniform and random magnetic fields. The evident structure found is consistent with toy-model simulations of a supergalactic magnetic sheet and the previously published Hot/Coldspot results of TA. The post-trial probability of this feature appearing by chance, on an isotropic sky, is found by Monte Carlo simulation to correspond to ~4.5σ.
Speaker: Jon Paul Lundquist (University of Utah - Telescope Array)
SupergalacticMagnetic_UHECR2.pdf
Ultra-High-Energy Cosmic Rays from Radio Galaxies 20m
Radio galaxies are intensively discussed as the sources of cosmic rays observed above about 3 EeV, called ultra-high energy cosmic rays (UHECRs). The talk presents a first, systematic study that takes the individual characteristics of these sources into account, as well as the impact of the Galactic magnetic field and of extragalactic magnetic-field structures up to a distance of 120 Mpc.
It will be shown that the average contribution of radio galaxies taken over a very large volume cannot explain the observed features of UHECRs measured at Earth, but could provide an explanation of the CRs with energies of a few EeV. However, a very good agreement with the spectrum, composition, and arrival-direction distribution of UHECRs measured by the Pierre Auger Observatory (Auger) is obtained by the contribution from only a few ultra-luminous ones, in particular Cygnus A and Centaurus A. Cygnus A needs to provide a mostly light composition of nuclear species dominating up to about 60 EeV, whereas the nearest radio galaxy, Centaurus A, provides a heavy composition and starts to dominate above 60 EeV. Thus, this scenario most likely also predicts differences in UHECR spectrum and composition between the northern and southern hemispheres. In order to account for these differences we include the geometrical exposure effects of Auger and the Telescope Array Observatory, which further improves the agreement with their measurements.
Speaker: Björn Eichmann
Eichmann_v2.pdf
Cosmogenic neutrinos from a combined fit of the Auger spectrum and composition 20m
We present a combined fit of the Auger spectrum and composition based on a newly developed code for the extragalactic propagation of cosmic ray nuclei (PriNCe). This very efficient numerical solver of the transport equations allows for scans over large ranges of unknown UHECR source parameters.
Here, we present a study of a generalized source population with three parameters (rigidity-dependent maximal energy, spectral index and redshift evolution). By scanning over the redshift source evolution we derive a robust estimate of the allowed range of the cosmogenic neutrino flux.
We also test the robustness under alternative assumptions for the source model, specifically the impact of using different air-shower models.
Speaker: Jonas Heinze
2018_10_9_UHECR_workshop_4to3.pdf
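The scan over source parameters described above can be sketched schematically as follows (the figure of merit is a placeholder, not a PriNCe propagation; parameter ranges are arbitrary):

```python
# Toy grid scan over (spectral index, rigidity cutoff, source evolution):
# evaluate a stand-in chi2 at each grid point and keep the allowed region.
import itertools
import numpy as np

gammas = np.linspace(-1.5, 2.5, 21)        # spectral index
rcuts  = np.logspace(9.0, 10.5, 16)        # rigidity cutoff [GV]
evols  = np.linspace(-4, 6, 11)            # (1+z)^m source evolution

def toy_chi2(gamma, rcut, m):
    # Placeholder: a real scan would propagate with a transport code and
    # compare predicted spectrum/composition to the Auger data.
    return (gamma - 1.0)**2 + (np.log10(rcut) - 9.7)**2 + 0.1 * (m - 3)**2

results = {(g, r, m): toy_chi2(g, r, m)
           for g, r, m in itertools.product(gammas, rcuts, evols)}
best = min(results, key=results.get)
allowed = [k for k, v in results.items() if v - results[best] < 4.0]
print("best point:", best, "| allowed grid points:", len(allowed))
```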
The most updated results of the magnetic field structure of the Milky Way 20m
Magnetic fields are an important agent in cosmic-ray transport. The observed all-sky Faraday rotation distribution implies that the magnetic fields in the Galactic halo have a toroidal structure, but the radius range, scale height and strength of the toroidal fields are totally unknown. In the Galactic disk, the magnetic fields probably follow the spiral structure with a strength of a few microGauss and several large-scale reversals in the arm and interarm regions, as inferred from the rotation measure distribution of pulsars inside our Milky Way and from the rotation measure difference between distant pulsars and the radio sources behind the Galactic disk. See Han JL (2017, ARA&A 55, 111) and Han JL et al. (2018, ApJS 234, 11) for details.
Speaker: Prof. JinLin Han (National Astronomical Observatories, Chinese Academy of Sciences)
JinLInHan.pdf
Ultra-high energy cosmic rays from radio galaxies 20m
The origin of ultra-high energy cosmic rays (UHECRs) is an open question, but radio galaxies offer one of the best candidate acceleration sites. Acceleration at the termination shocks of relativistic jets is problematic because relativistic shocks are poor accelerators to high energy. Using hydrodynamic simulations and general physical arguments, I will show that shocks with non- or mildly relativistic shock velocities can be formed as plasma flows from the termination shock into the radio lobe and that these shocks have suitable characteristics for acceleration to 10-100EeV. I will discuss a model in which giant-lobed radio galaxies such as Centaurus A and Fornax A act as slowly-leaking UHECR reservoirs, with the UHECRs being accelerated during a more powerful past episode. I will also show that Centaurus A, Fornax A and other radio galaxies may explain the observed hotspots in the Auger and TA data at ultra-high energies.
Speaker: James Matthews (University of Oxford)
James-Matthews.pdf
UHECR science with ground-based imaging atmospheric Cherenkov telescopes 20m
Arrays of imaging atmospheric Cherenkov telescopes (IACTs), such as VERITAS and the future CTA observatory, are designed to detect particles of astrophysical origin. IACTs are nominally sensitive to gamma rays and cosmic rays at energies between tens of GeV and hundreds of TeV. As such, they can be used as both direct and indirect probes of particle acceleration to very high energies.
Recent measurements by VERITAS of the cosmic ray electron and iron spectra are discussed. Such observations probe the activity of nearby sources, and may shed light on acceleration mechanisms and on propagation effects. In addition, gamma rays can be used to study ultra high energy cosmic rays (UHECRs), by directly observing their sources. Gamma ray bursts (GRBs) have been proposed as possible UHECR accelerators. More specifically, low-luminosity GRBs are emerging as leading candidates. A new study on the detection prospects for such events with CTA is presented.
Speaker: Dr Iftach Sadeh (DESY-Zeuthen)
20181009_sadeh.pdf
Coffee break and posters 30m Hall
Simulation of the optical performance of the Fluorescence detector Array of Single-pixel Telescopes 4m
The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a proposed large-area, next-generation experiment for the detection of ultra-high energy cosmic rays via the atmospheric fluorescence technique. The telescope's large field-of-view (30°x30°) is imaged by four 200 mm photomultiplier tubes at the focal plane of a segmented spherical mirror of 1.6 m diameter. Two prototypes are installed and taking data at the Black Rock Mesa site of the Telescope Array experiment in central Utah, USA. We present the process used for optimisation of the optical performance of this compact and low-cost telescope, which is based on a simulation of the telescope's optical point spread function.
Speakers: Dusan Mandat (Institute of Physics of Academy of Science of The Czech Republic), Toshihiro Fujii (ICRR, University of Toyo)
New Constraints on the Random Magnetic Field of the Galaxy 4m
The knowledge of the magnitude and coherence length of the random component of the Galactic Magnetic Field (GMF) is of fundamental importance for establishing the rigidity threshold above which astronomy with charged particles is possible. Here we present a new study of the random component of the GMF using synchrotron intensity as measured by Planck, WMAP and Haslam et al., and combine it for the first time with the observed fluctuations of the rotation measures of extragalactic radio sources. This combined information allows us to constrain both the strength and the coherence length of the random magnetic field in the Galaxy.
Atmospheric transparency measurement on Telescope Array site by the central laser facility 4m
The TA experiment has three FD stations, containing 38 FD telescopes in total. In addition, 16 FD telescopes were newly added by TAx4 and TALE. In order to reconstruct air shower information from the FD observation data, it is necessary to calibrate the influence of aerosol attenuation; the CLF measures the atmospheric transparency of the TA site.
Speakers: Dr Takayuki Tomida (Shinshu University), For the Telescope Array collaboration
Investigating an angular correlation between nearby starburst galaxies and ultrahigh-energy cosmic rays with the Telescope Array experiment 5m
The arrival directions of cosmic rays detected by the Pierre Auger Observatory (Auger) with energies above 39 EeV were recently reported to correlate with the positions of 23 nearby starburst galaxies (SBGs): in their best-fit model, 9.7% of the cosmic-ray flux originates from these objects and undergoes angular diffusion on a 12.9° scale. On the other hand, some of the SBGs on their list, including the brightest one (M82), are at northern declinations outside the Auger field of view. Data from detectors in the northern hemisphere would be needed to look for cosmic-ray excesses near these objects. In this work, we preliminarily tested the Auger best-fit model against data collected by the Telescope Array (TA) in a 9-year period, without trying to re-optimize the model parameters for our dataset in order not to introduce statistical penalties. The resulting test statistic (double log-likelihood ratio) was -0.54, corresponding to 1.2σ significance among isotropically generated random datasets, and to -1.3σ significance among ones generated assuming the Auger best-fit model. In other words, our data is still insufficient to conclusively rule out either hypothesis. The ongoing fourfold expansion of TA will collect northern hemisphere data with much more statistics, improving our ability to discriminate between different flux models.
Speaker: Armando di Matteo (ULB, Brussels, Belgium)
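The quoted test statistic, a double log-likelihood ratio between an anisotropic flux model and isotropy, can be sketched as follows (toy sky map and events, not the TA data or the Auger best-fit model):

```python
# Toy TS: twice the log-likelihood ratio between a flux model and
# isotropy, evaluated at observed arrival directions on an assumed
# equal-area pixelisation of the sky.
import numpy as np

npix = 1000
model_map = np.ones(npix)        # start from isotropy...
model_map[:50] += 5.0            # ...add an excess near hypothetical sources
model_map /= model_map.sum()     # normalise to a probability per pixel
iso_map = np.full(npix, 1.0 / npix)

rng = np.random.default_rng(1)
event_pix = rng.integers(0, npix, size=300)   # toy event directions

TS = 2.0 * np.sum(np.log(model_map[event_pix] / iso_map[event_pix]))
print("TS =", TS)  # significance comes from isotropic / model-sky simulations
```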
Air Shower Structure measured with the Telescope Array Surface Detectors 3m
The Telescope Array, constructed in Utah, USA, is the largest air shower observatory in the northern hemisphere, aiming at clarifying the origin of UHECRs. For a better understanding of the air shower phenomenon, we report a study of the distribution of arriving signals measured with the FADCs of the TA surface detector, using 10 years of TA SD data; the studied quantities include the delay time with respect to the shower front plane and the thickness of the particle disk. The analysis selects a data sample spanning energies from 7.08 to over 100 EeV with minimal bias and systematic uncertainties, in order to observe the correlation between the disk thickness and the distance of each SD from the shower axis, which depends on the signal composition (electromagnetic or muonic), the impact parameter, the energy, and geometrical effects such as the zenith and azimuthal angle along the shower plane.
Speaker: Ms Rosa Mayta Palacios (Osaka City University)
Convener: Prof. Peter Grieder
Latest cosmic-ray results from IceCube and IceTop 20m
The IceCube Neutrino Observatory at the geographic South Pole, with its surface array IceTop, detects three different components of extensive air showers: the total signal at the surface, low energy muons in the periphery of the showers, and high energy muons in the deep array of IceCube. These three components allow for a variety of cosmic ray measurements including the energy spectrum and composition of cosmic rays from the PeV to the EeV range, the anisotropy in the distribution of cosmic ray arrival directions, the muon density of cosmic ray air showers, and the PeV gamma ray flux. Furthermore, IceTop can be used as a veto for the neutrino measurements. The latest results from these IceTop analyses will be presented along with future plans.
Speaker: Karen Andeen (Marquette University)
1_Andeen_UHECR_2018.pptx
The Cosmic-Ray Energy Spectrum between 2 PeV and 2 EeV Observed with the TALE detector in monocular mode 20m
We present a measurement of the cosmic ray energy spectrum by the Telescope Array Low-Energy Extension (TALE) air fluorescence detector (FD). The TALE FD is also sensitive to the Cherenkov light produced by shower particles. Low energy cosmic rays, in the PeV energy range, are detectable by TALE as "Cherenkov events". Using these events, we measure the energy spectrum from a low energy of ~2 PeV to an energy greater than 100 PeV. Above 100 PeV TALE uses the air fluorescence technique to reach energies of a few EeV. In this talk, we will describe the detector, explain the technique, and present results from a measurement of the spectrum using ~1080 hours of observation. The observed spectrum shows a clear steepening near 10^17.1 eV, along with an ankle-like structure at 10^16.2 eV. These features present important constraints on galactic cosmic-ray origin and propagation models. The feature at 10^17.1 eV may also mark the end of the galactic cosmic-ray flux and the start of the transition to extra-galactic sources.
Speaker: Charles Jui
2_tale-uhecr2018-abu-zayyad-jui.pdf
KASCADE-Grande: Post-operation analyses and latest results 20m
The KASCADE-Grande experiment has significantly contributed to the current knowledge about the energy spectrum and composition of cosmic rays for energies between the knee and the ankle. Meanwhile, post-LHC versions of the hadronic interaction models are available and used to interpret the entire data set of KASCADE-Grande. In addition, a new, combined analysis of both arrays, KASCADE and Grande, was developed, significantly increasing the accuracy of the shower observables. Results of the new analyses of the KASCADE-Grande experiment will be discussed, as well as limits on the high-energy gamma ray flux over a large energy range. In addition, further developments of the KASCADE Cosmic Ray Data Centre (KCDC) will be presented.
Speakers: Andreas Haungs (KIT), KASCADE-Grande collaboration
3_Haungs-KG-UHECR18.pdf
Primary Energy Spectrum by the Data of EAS Cherenkov Light Arrays Tunka-133 and TAIGA-HiSCORE 20m
Tunka-133 has collected data since 2009. The data of 7 winter seasons (2009-2014 and 2015-2017) have been processed and analyzed to date. The new TAIGA-HiSCORE array, designed mostly for gamma-astronomy tasks, can also be used for reconstruction of the all-particle primary energy spectrum. These two arrays provide a very wide range of primary energy measurements, 2x10^14 - 2x10^18 eV, with the same method of Cherenkov light registration. The new joint data on the primary energy spectrum in this wide energy range are presented.
Speaker: Vasily Prosin
4_uhecr2018_prosin.pdf
Transition from Galactic to Extragalactic Cosmic Rays 30m
In addition to the all-particle cosmic ray (CR) spectrum, data on the primary composition and anisotropy have become available from the knee region up to a few $\times 10^{19}$ eV. These data point to an early Galactic-extragalactic transition and the presence of a Peters cycle, i.e. a rigidity-dependent maximal energy. Theoretical models therefore have to explain the ankle as a feature in the extragalactic CR spectrum. Moreover, the Galactic CR spectrum has to explain the knee and has to extend to sufficiently high energies. I review the experimental data and their interpretation, as well as models aiming to reproduce them.
Speaker: Michael Kachelriess (Department of Physics, NTNU)
mk_uhecr.pdf
Convener: Prof. Günter Sigl (University of Hamburg)
Ultra High Energy Cosmic Ray Propagation and Source Signatures 30m
Knowledge about the processes dictating UHECR losses during their propagation in extragalactic space allows the secondary species to be used to probe the source location. In this talk I will cover the state of our knowledge of these processes, and give examples of the properties of the sources that may be inferred from the observed secondary species at Earth. Some suggestions will also be provided as to how such multi-messenger studies may be taken further in the future.
Speaker: Prof. Andrew Taylor
1_Andrew_Taylor_UHECR2018.pdf
Galactic and Intergalactic magnetic fields 30m
I will review the status of measurements and modelling of Galactic and intergalactic magnetic fields in the context of multi-messenger astrophysics and in particular of UHECR observations.
Speaker: Prof. Andrii Neronov (University of Geneva & APC, Paris)
UHECR2018_Paris_neronov.pdf
The extragalactic gamma-ray background above 100 MeV 30m
I will review our knowledge about the properties and the origin of the extragalactic gamma-ray background above 100 MeV. Since the universe is transparent to MeV and GeV gamma rays up to very high redshifts, the extragalactic gamma-ray background contains the imprint of all gamma-ray emission from the beginning of star formation until the present day. Its properties have important implications in the context of multi-messenger astronomy and put constraints on beyond-the-standard-model physics.
Speaker: Markus Ackermann (DESY)
EGB_UHECR_2018_v2.0.pdf
Cloud monitoring at Telescope Array site by Visible Fisheye CCD. 3m
The Telescope Array (TA) is an international experiment studying ultra-high energy cosmic rays. TA uses the fluorescence detection technique to observe cosmic rays, and in order to estimate the flux of cosmic rays with observations of the fluorescence detector (FD), it is necessary to correctly estimate the conditions in the FD observation area. Because clouds have a great influence on the Field Of View (FOV) of the FD, it is necessary to measure the cloud amount and formation in the FOV. We realize a cloud monitor of the night sky with a CCD camera. In this report, we will introduce results from the CCD cloud monitor.
CRPropa 3.2: Improved and extended open-source astroparticle propagation framework from TeV to ZeV energies 3m
Experimental observations of Galactic and extragalactic cosmic rays, neutrinos and gamma rays in the last decade challenge the theoretical description of both the sources and the transport of these particles. The latest version of the publicly available simulation framework CRPropa 3.2 is a Monte-Carlo based software package capable of providing consistent solutions of the cosmic-ray origin problem. It is not only able to describe the propagation of Galactic and extragalactic cosmic rays in a ballistic single-particle approach, but can also solve a cosmic-ray transport equation, describe the production and propagation of neutrinos and electromagnetic cascades, and simulate the cosmic-ray acceleration inside their sources. This combined approach will allow for a consistent description of cosmic rays, neutrinos and photons from the highest energies down to the TeV range, including electromagnetic cascades down to the GeV range. This contribution will summarize the latest extensions and improvements of the code, e.g. solving the transport equation, revamped electromagnetic cascades, source targeting, cosmic-ray acceleration and many technical enhancements. The new opportunities coming with these developments will be explained including simple user examples.
Speaker: Dr Arjen van Vliet (DESY Zeuthen)
VanVliet_Poster_CRPropa32_UHECR2018.pdf
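As a usage illustration, a short 1D propagation script in the spirit of the CRPropa 3 documentation examples (class names follow CRPropa 3 and may differ between versions; the energies, source distance and spectral index are arbitrary choices, not a recommended setup):

```python
from crpropa import *

# 1D propagation of protons from a source at 100 Mpc, with CMB losses.
sim = ModuleList()
sim.add(SimplePropagation(1 * kpc, 10 * Mpc))
sim.add(PhotoPionProduction(CMB()))
sim.add(ElectronPairProduction(CMB()))
sim.add(MinimumEnergy(1 * EeV))          # stop tracking below 1 EeV

obs = Observer()
obs.add(ObserverPoint())                 # observer at x = 0
output = TextOutput('events.txt', Output.Event1D)
obs.onDetection(output)
sim.add(obs)

source = Source()
source.add(SourcePosition(100 * Mpc))
source.add(SourceParticleType(nucleusId(1, 1)))       # protons
source.add(SourcePowerLawSpectrum(1 * EeV, 100 * EeV, -1))

sim.setShowProgress(True)
sim.run(source, 10000, True)
output.close()
```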
Origins of Extragalactic Cosmic Ray Nuclei by Contracting Alignment Patterns induced in the Galactic Magnetic Field 3m
We present a novel approach to search for the origins of ultra-high energy cosmic rays. In a simultaneous fit to all observed cosmic rays we use the galactic magnetic field as a mass spectrometer and adapt the nuclear charges such that their extragalactic arrival directions are concentrated in as few directions as possible. During the fit the nuclear charges are constrained by the individual energy and shower depth measurements. Using different simulated examples we show that, with the measurements on Earth, reconstruction of extragalactic source directions is possible. In particular, we show in an astrophysical scenario that source directions can be reconstructed even within a substantial isotropic background.
Speaker: Mr Marcus Wirtz (RWTH Aachen University)
contracrting-pattern-uhecr2018.pdf
The detection of UHECRs with the EUSO-TA telescope 3m
EUSO-TA is a cosmic ray detector developed by the JEM-EUSO Collaboration (Joint Experiment Missions for Extreme Universe Space Observatory), observing during nighttime the fluorescence light emitted along the path of extensive air showers in the atmosphere. It is installed at the Telescope Array site in Utah, USA, in front of the fluorescence detector station in Black Rock Mesa, as a ground-based pathfinder experiment for a future space-based mission.
EUSO-TA has an optical system with two Fresnel lenses and a focal surface with $6\times6$ multi-anode photomultiplier tubes with 64 channels each, for a total of 2304 channels. The overall field of view is $10.6°\times10.6°$. This detector technology allows the detection of cosmic ray events with high spatial resolution, with each channel having a field of view of about $0.2°\times0.2°$, and a temporal resolution of $2.5\,\mu \mbox{s}$.
The observation of the first ultra high energy cosmic rays, supported by dedicated simulation studies, revealed the cosmic ray detection capability of EUSO-TA. Simulations were also used to test the trigger algorithm which will make the detector autonomous rather than working in coincidence with the Telescope Array fluorescence detectors. The foreseen upgrade of EUSO-TA will improve the efficiency of the detector and will increase the statistics of detected events.
In this work I will report on the recent results about the detection capability of EUSO-TA and its limits, passing through the variation of signals depending on the energy and geometry of the extensive air showers.
Speakers: Francesca Bisconti (INFN Sezione di Torino), Mario Bertaina (Univ. of Torino, Italy), Kenji Shinozaki (University of Torino, Italy)
TA SD Spectrum 3m
Telescope Array (TA) is a large cosmic ray detector in the Northern hemisphere that measures cosmic rays of energies from PeV to 100 EeV and higher. The main TA consists of a surface detector (SD) array of 507 plastic scintillation counters with 1200 m separation on a square grid, overlooked by three fluorescence detector stations. We present the cosmic ray energy spectrum measured by the TA SD above 10^18.2 eV and discuss the TA SD measurement and reconstruction techniques, which are based on a detailed Monte Carlo simulation of the detector. We will also demonstrate that two different analysis approaches, the constant intensity cuts method and the Monte-Carlo based energy estimation procedure, produce the same answer in the energy domain where the TA SD acceptance is constant with energy.
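For illustration, a minimal Python sketch of the constant-intensity-cut idea on toy data (the attenuation law, binning and cut level below are assumptions, not the TA analysis):

```python
# Toy constant intensity cut: for a fixed integral intensity, the signal
# threshold vs. zenith angle traces the atmospheric attenuation, which is
# then used to correct signals to a reference zenith angle.
import numpy as np

rng = np.random.default_rng(2)
nev = 200000
theta = np.degrees(np.arccos(np.sqrt(rng.uniform(0, 1, nev))))  # toy zenith dist.
s_ref = rng.pareto(2.0, nev) + 1.0                    # toy signal at reference
atten = np.exp(-(1.0 / np.cos(np.radians(theta)) - 1.0))  # assumed attenuation
s_obs = s_ref * atten

bins = np.linspace(0, 60, 7)
idx = np.digitize(theta, bins) - 1
intensity_cut = 0.01  # keep the brightest 1% in each zenith bin
cic_curve = [np.quantile(s_obs[idx == i], 1 - intensity_cut) for i in range(6)]
# Dividing an observed signal by this curve (normalised at the reference
# bin) gives a zenith-independent energy estimator.
print(np.round(cic_curve, 2))
```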
The Atmospheric Electricity Studies at the Pierre Auger Observatory 3m
The Fluorescence Detector (FD) at the Pierre Auger Observatory has triggered on numerous elves since the first observation in 2005, and it has potential for simultaneous Terrestrial Gamma ray Flashes (TGF) detection. In addition, the Surface Detector (SD) observed peculiar events with radially expanding footprints, which are correlated with lightning strikes reconstructed by the World Wide Lightning Location Network (WWLLN).
Emissions of Light from Very low frequency perturbations due to Electromagnetic pulse Sources (elves) expand radially up to 300 km (in ~1 ms) at the base of the ionosphere. With the 100 ns time resolution of the FD, Auger provides the community with a detailed structure of the emission region, necessary for the study of various lightning discharges (i.e. compact intra-cloud discharges and energetic in-cloud pulses) possibly associated with the current hot topic in atmospheric electricity physics, TGFs. In 2014, we improved the elves trigger for the Auger FD to allow the acquisition of photon traces up to 300 us and to improve our reconstruction of the lightning bolt position. In addition, the 30 degree field of view of individual FD telescopes is wide enough to capture lightning-related phenomena happening just above thunderstorms (~20 km altitude) and correlated elves at the base of the ionosphere (~90 km altitude).
Also in 2005, Auger found the first peculiar SD event with long-lasting traces (~10 us) compared to typical muon signals (~0.1 us). Since then, approximately 30 sporadic events have been found to have similar, radially expanding footprints and time structures. The footprints vary from 2 to 8 km. Using the reconstruction of the events, we found that the observed timing is consistent with a spherical front expanding at the speed of light from an origin point very close to the ground. In addition to the presence of triggered SD stations with high-frequency noise caused by lightning RF signals, many events are in coincidence with WWLLN. More recent focus has been on potential trigger improvements for the detection of future events.
Speaker: Kevin-Druis Merenda
AugerPrime implementation in the Offline simulation and reconstruction framework 3m
The Pierre Auger Observatory is currently upgrading its surface detector array by placing a 3.84 square meter scintillator on top of each of the existing 1660 water-Cherenkov detectors. The differing responses of the two detectors allow for the disentanglement of the muonic and electromagnetic components of extensive air showers, which ultimately facilitates reconstruction of the mass composition of ultra-high-energy cosmic rays on an event-by-event basis. Simulations of the scintillator surface detector enable both an assessment of proposed reconstruction algorithms and the interpretation of real shower measurements. The design and implementation of these simulations within a Geant4-based module inside Auger's software framework (Offline) are presented in addition to the tuning of these simulations to detector response measurements performed using a centimeter precision muon telescope. Augmentations of the Offline framework in order to accommodate the large-scale detector upgrade are also presented.
Speaker: David Schmidt (Karlsruhe Institute of Technology)
Studies for High Energy air shower identification using RF measurements with the ASTRONEU array 4m
The Hellenic Open University (HOU) Cosmic Ray Telescope (ASTRONEU) comprises 9 charged-particle detectors and 3 RF antennas arranged in three autonomous stations operating at the University Campus of HOU in the city of Patra. In this work, we extend the analysis of very high energy showers that are detected by more than one station and in coincidence with the RF antennas of the Telescope. We present the angular distributions as well as the energy distribution of the selected showers in comparison to Monte Carlo (MC) expectations. Special attention is given to the transfer functions of the antennas, which are strongly frequency- and angular-dependent. We find that the RF spectra (at frequencies 30-80 MHz) of the detected showers exhibit features of the antenna response predicted by detailed MC simulation, thus suggesting that a single antenna spectrum might give access to the cosmic ray arrival direction.
Speaker: Mr Stavros Nonis (Malkou)
UHECR_2018_nonis.pdf
Convener: Andrew Taylor (MPIK)
Inductive Particle Acceleration 30m
Speaker: John KIRK
slides43.pdf
Black hole jets in clusters of galaxies as sources of high-energy cosmic particles 30m
It has been a mystery that, with ten orders of magnitude difference in energy, high-energy neutrinos, ultrahigh-energy cosmic rays, and sub-TeV gamma rays all present comparable energy injection rates, hinting at an unknown common origin. Here we show that black hole jets embedded in clusters of galaxies may work as sources of all three messengers. By numerically simulating the propagation of cosmic ray particles in the magnetized intracluster medium (ICM), we show that the highest-energy cosmic rays leave the source rectilinearly, the intermediate-energy cosmic rays are confined by their massive host and interact with the ICM gas to produce secondary neutrinos and gamma rays, and the lowest-energy cosmic rays are cooled due to the expansion of the radio lobes inflated by the jets. The energy output required to explain the measurements of all three messengers is consistent with observations and theoretical predictions of black hole jets in clusters.
Speaker: Ke FANG
Fang_cluster.pdf
Multi-messenger Astrophysics at Ultra-High Energy with the Pierre Auger Observatory 20m
The study of correlations between observations of fundamentally different nature from extreme cosmic sources promises extraordinary physical insights into the Universe. With the Pierre Auger Observatory we can significantly contribute to multi-messenger astrophysics by searching for ultra-high energy particles, particularly neutrinos and photons which, being electrically neutral, point back to their origin. Using Pierre Auger Observatory data, stringent limits at EeV energies have been established on the photon and neutrino fluxes from a large fraction of the sky, probing the production mechanisms of ultra-high energy cosmic rays. The good angular resolution and the neutrino identification capabilities of the Observatory at EeV energies allow the follow-up of events detected in gravitational waves, such as the binary mergers observed with the Advanced LIGO/Virgo detectors, or from other energetic sources of particles.
Speaker: Alvarez-Muniz Jaime (Dept. Particle Physics, Univ. Santiago de Compostela)
Auger_multimessenger_UHECR18.pdf
Convener: Enrique Zas
Recent IceCube results - evidences of neutrino emission from the blazar TXS 0506+056 and searches for Glashow resonance 30m
Finally, a hundred years after the discovery of cosmic rays, a blazar has been identified as a source (at the ~3 sigma level) of high-energy neutrinos and cosmic rays, thanks to the real-time multimessenger observation led by the cubic-kilometer IceCube neutrino observatory. In this talk, details of the spatial and timing correlation analysis of the ~290 TeV neutrino event with Fermi light curves will be presented.
The second part of the talk will be dedicated to the searches for the highest energy neutrinos of all flavours with IceCube. In particular, results on the Glashow resonance and implications for the diffuse neutrino spectrum will be shown for the first time. The early muons originating from the hadronic shower generated via the Glashow resonance decay could be used to improve the cascade direction resolution. Possible sources of the Glashow candidate will be introduced.
Speaker: Lu Lu (Chiba University)
UHECR_Paris_Lu_upload.pdf
Latest results on high-energy cosmic neutrino searches with the ANTARES neutrino telescope 20m
The ANTARES detector is currently the largest undersea neutrino telescope. Located in the Mediterranean Sea at a depth of 2.5 km, 40 km off the southern coast of France, it has been looking for cosmic neutrinos for more than 10 years. High-energy cosmic neutrino production is strongly linked with cosmic ray production. The latest results from IceCube represent a step forward towards the confirmation of a high energy cosmic ray source. The ANTARES location in the Northern Hemisphere is optimal for the observation of most of the Galactic Plane, including the Galactic Center. It has allowed ANTARES to contribute independently to constraining the origin of the IceCube neutrino excess as well as, more recently, the flux from the source identified with the blazar TXS 0506+056. The latest ANTARES results from such analyses, including point-like and extended sources, diffuse fluxes, transient phenomena and multi-messenger studies, will be presented.
Speaker: Agustín Sánchez Losa (INFN - Sezione di Bari)
2018-10-10 UHECR2018.pdf
Search for a correlation between the UHECRs measured by the Pierre Auger Observatory and the Telescope Array and the neutrino candidate events from IceCube and ANTARES 20m
We present the results of three searches for correlations between UHECR events measured by the Pierre Auger Observatory and Telescope Array and high energy neutrino candidate events from IceCube and ANTARES. A cross-correlation analysis is performed, where the angular separation between the arrival directions of UHECRs and neutrinos is scanned. The same events are also exploited in a separate search by stacking the neutrino arrival directions: a maximum likelihood approach is used where a modelling of magnetic deflections of UHECRs is included and accounted for. Finally, a similar analysis is performed on stacked UHECR arrival directions and the IceCube and ANTARES samples of through-going muon-track events that were optimised for neutrino point source searches.
Speaker: Dr Lorenzo Caccianiga (Università degli studi di Milano)
UHECR2018 (1).pdf
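The cross-correlation scan described above can be sketched as follows (toy directions and scan angles; significances would come from comparing the pair counts to isotropic simulations):

```python
# Toy cross-correlation: count UHECR-neutrino pairs closer than an
# angular separation psi, scanning psi over a set of values.
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation in radians (inputs in radians)."""
    return np.arccos(np.clip(
        np.sin(dec1) * np.sin(dec2) +
        np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2), -1.0, 1.0))

rng = np.random.default_rng(3)
ra_cr, dec_cr = rng.uniform(0, 2*np.pi, 100), np.arcsin(rng.uniform(-1, 1, 100))
ra_nu, dec_nu = rng.uniform(0, 2*np.pi, 50), np.arcsin(rng.uniform(-1, 1, 50))

seps = ang_sep(ra_cr[:, None], dec_cr[:, None], ra_nu[None, :], dec_nu[None, :])
for psi in np.radians([1, 5, 10, 20, 30]):
    n_pairs = np.sum(seps < psi)
    print(f"psi = {np.degrees(psi):4.0f} deg : {n_pairs} pairs")
```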
Overview and results from the first four flights of ANITA 20m
ANITA was designed as a discovery experiment for ultra-high energy (UHE) neutrinos using the radio Askaryan detection technique, launching from McMurdo Station in Antarctica under NASA's long-duration balloon program and observing 1.5 million square kilometers of ice at once from an altitude of 40 km. Over ANITA's four flights we have set the best constraints on UHE neutrino fluxes above 10^19 eV and unexpectedly observed radio emission from UHE cosmic ray air showers, while improving the instrument and lowering thresholds on each flight. I will give an overview of ANITA's remarkable history and plans for the future.
Speaker: Amy Connolly (The Ohio State University)
Connolly_UHECR18.pdf
The cosmogenic neutrino flux determines the fraction of protons in UHECRs 20m
When UHECRs propagate through the universe, cosmogenic neutrinos are created via several interactions. In general, the expected flux of these cosmogenic neutrinos depends on multiple parameters describing the sources and propagation of UHECRs. However, using CRPropa, we show that a 'sweet spot' occurs at a neutrino energy of ~1 EeV. At that energy the flux depends strongly on only two parameters: the source evolution and the fraction of protons in UHECRs. These parameters are already constrained by current neutrino experiments, indicating that the sources of UHECRs cannot have both a large proton fraction and a strong source evolution. Upcoming neutrino experiments will be able to constrain the fraction of protons in UHECRs even further, for any realistic model of the evolution of UHECR sources.
20181010_AvanVliet_ProtonFraction.pdf
TALE surface detector array and TALE hybrid system 3m
The Telescope Array Low-energy Extension (TALE) experiment is a hybrid air shower detector for observation of air showers produced by very high energy cosmic rays above 10^16.5 eV. TALE is located at the north part of the Telescope Array (TA) experiment site in the western desert of Utah, USA. TALE has a surface detector (SD) array made up of 103 scintillation counters, including 40 with 400 m spacing, 36 with 600 m spacing and 27 with 1.2 km spacing, and a Fluorescence Detector (FD) station consisting of ten FD telescopes located at the Telescope Array Middle Drum FD station, which is made up of 14 telescopes. The TALE FD has been operational since 2013. The deployment and construction of the 103 SDs were completed in 2018, and to date about 80% of the array is in operation with a full triggering and DAQ system. Moreover, the hybrid triggering system will be implemented in September 2018. Here we report an overview of the experiment, its capabilities and the technical details of the TALE SD array and the hybrid operations.
Speaker: Prof. Shoichi Ogio (Osaka City University)
tale_uhecr2018_v1.pdf
Search for Extreme Energy Cosmic Rays with the TUS telescope and comparison with ESAF 3m
The Track Ultraviolet Setup (TUS) detector was launched on April 28, 2016 as a part of the scientific payload of the Lomonosov satellite. TUS is a path-finder mission for future space-based observation of Extreme Energy Cosmic Rays (EECRs, E > 5x10^19 eV) with experiments such as K-EUSO. TUS data offer the opportunity to develop strategies in the analysis and reconstruction of the events which will be essential for future space-based missions.
During its operation TUS has detected about 80 thousand events which have been subject to an offline analysis to select among them those that satisfy basic temporal and spatial criteria of EECRs. A few events passed this first screening. In order to perform a deeper analysis of such candidates, a dedicated version of ESAF (EUSO Simulation and Analysis Framework) code as well as a detailed modeling of TUS optics and detector have been developed.
This contribution will report on the results of such an analysis.
Speaker: Mario Bertaina (Univ. of Torino, Italy)
mario_TUS_V2.pdf
Cloud distribution evaluated by the WRF model during the EUSO-SPB1 flight 3m
EUSO-SPB1 was a balloon-borne mission of the JEM-EUSO (Joint Experiment Missions for Extreme Universe Space Observatory) Program aiming at the observation of UHECRs from space. The EUSO-SPB1 telescope was a fluorescence detector with a 1 m^2 Fresnel refractive optics and a focal surface covered with 36 multi-anode photomultiplier tubes for a total of 2304 channels covering a ~11 degree FOV. Each channel performed photon counting in every 2.5 µs time frame, allowing for spatiotemporal imaging of the air shower events. Being provided with an active trigger algorithm, EUSO-SPB1 was the first balloon-borne experiment having the potential to detect air shower events initiated by cosmic rays in the range of several EeV. On 25 April 2017, EUSO-SPB1 was launched from Wanaka, New Zealand on NASA's Super Pressure Balloon, which flew at ~16-33 km flight altitude for ~292 hours. Before the flight was terminated due to an unexpected gas leakage, we retrieved ~27 hours of data acquired in the air shower detection mode. In the present work, we aim at evaluating the role of the clouds during the operation of EUSO-SPB1. We employ the WRF (Weather Research and Forecasting) model to numerically calculate the cloud distribution in the EUSO-SPB1 FOV. We discuss the key results of the WRF model and the impact of the clouds on the air shower measurement and on the efficiency of the cosmic ray observation. We will also mention the relevant issues towards future balloon-borne and satellite-based UHECR observation missions.
Speaker: Kenji Shinozaki (University of Torino, Italy)
Determination of the invisible energy of extensive air showers from the data collected at Pierre Auger Observatory 3m
In order to get the primary energy of cosmic rays from their extensive air showers using the fluorescence detection technique, the invisible energy should be added to the measured calorimetric energy. The invisible energy is the energy carried away by particles that do not deposit all their energy in the atmosphere.
It has traditionally been calculated using Monte Carlo simulations that are dependent on the assumed primary particle mass and on model predictions for neutrino and muon production.
In this work the invisible energy is obtained directly from events detected by the Pierre Auger Observatory. The method applied is based on the correlation of the measurements of the muon number at the ground with the invisible energy of the showers. By using it, the systematic uncertainties related to the unknown mass composition and to the high energy hadronic interaction models are significantly reduced, improving in this way the estimation of the energy scale of the Observatory.
Speaker: Dr Analisa Mariazzi (Universidad Nacional de La Plata and CONICET, La Plata, Argentina)
Potential of a scintillator and radio extension of the IceCube surface detector array 3m
An upgrade of the present IceCube surface array (IceTop) with scintillation detectors and possibly radio antennas is foreseen. The enhanced array will calibrate the impact of snow accumulation on the reconstruction of cosmic-ray showers detected by IceTop as well as improve the veto capabilities of the surface array. In addition, such a hybrid surface array of radio antennas, scintillators and Cherenkov tanks will enable a number of complementary science cases for IceCube such as enhanced accuracy on the mass composition of cosmic rays, searches for PeV photons from the Galactic Center, or more thorough tests of the hadronic interaction models. Two prototype stations with 7 scintillation detectors each were already deployed at the South Pole in January 2018, where these R&D studies provide a window of opportunity to additionally integrate radio antennas with minimal effort.
Speaker: Andreas Haungs (KIT)
Haungs-UHECR2018-IceTopEnhancement.pdf
Study of muons from ultrahigh energy cosmic ray air showers measured with the Telescope Array experiment 3m
One of the uncertainties in ultrahigh energy cosmic ray (UHECR) observation derives from the hadronic interaction model used for air shower Monte-Carlo (MC) simulations. One may test the hadronic interaction models by comparing the measured number of muons observed at the ground from UHECR induced air showers with the MC prediction.
The Telescope Array (TA) is the largest experiment in the northern hemisphere observing UHECRs in Utah, USA. It aims to reveal the origin of UHECRs by studying the energy spectrum, mass composition and anisotropy of cosmic rays utilizing an array of surface detectors (SDs) and fluorescence detectors. We studied muon densities in UHE extensive air showers by analyzing the signals of TA SD stations for highly inclined showers. Given that muons contribute about 65% of the total signal, the number of particles from air showers is typically 1.88 ± 0.08 (stat.) ± 0.42 (syst.) times larger than the MC prediction with the QGSJET II-03 model for protons. The same feature was also obtained for other hadronic models, such as QGSJET II-04. In this presentation, we report the method and the result of the study of muons from UHECR air showers with the TA data.
Speaker: Dr Ryuji Takeishi (Sungkyunkwan University, South Korea)
TASDmuon_UHECR2018_v1.pdf
Direct measurement of the muon density in air showers with the Pierre Auger Observatory 3m
As part of the upgrade of the Pierre Auger Observatory, the AMIGA (Auger Muons and Infill for the Ground Array) underground muon detector extension will allow for direct muon measurements for showers falling into the 750m SD vertical array. We optimized the AMIGA muon reconstruction procedure by introducing a geometrical correction for muons leaving a signal in multiple detector strips due to their inclined momentum, and deriving a new unbiased parametrization of the muon lateral distribution function. Furthermore, we defined a zenith-independent estimator $\rho_{35}$ of the muon density by parametrizing the attenuation of the muonic signal due to the atmosphere and soil layer above the buried detectors and quantified the relevant systematic uncertainties for AMIGA. The analysis of one year of calibrated data recorded with the prototype array of AMIGA confirms the results of previous studies indicating a significant disagreement between the muon content in simulations and data.
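As a rough illustration of the zenith-attenuation idea described above — not AMIGA's actual parametrization — a CIC-style correction to a reference zenith angle of 35° could be sketched as follows; the polynomial form and the coefficients p1, p2 are assumptions:

```python
import numpy as np

# Illustrative correction of a measured muon density to a reference
# zenith of 35 degrees.  The attenuation polynomial and its coefficients
# are placeholders, not AMIGA's published attenuation curve.

def f_att(theta_deg, p1=1.0, p2=-0.5):
    x = np.cos(np.radians(theta_deg))**2 - np.cos(np.radians(35.0))**2
    return 1.0 + p1 * x + p2 * x**2

def rho35(rho_measured, theta_deg):
    """Zenith-independent muon-density estimator."""
    return rho_measured / f_att(theta_deg)

print(rho35(2.4, 20.0))  # muons per m^2, corrected from 20 deg to 35 deg
```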
Speaker: Mrs Sarah Mueller (KIT)
poster_uhecr18_amiga_smueller.pdf
Convener: Petr Tinyakov (Universite Libre de Bruxelles)
TA Anisotropy Summary 25m
The Telescope Array (TA) is the largest ultra-high-energy cosmic-ray (UHECR) detector in the northern hemisphere, which consists of 507 surface detectors (SDs) covering a total of 700 km^2 and three fluorescence detector stations. In this presentation, we will summarize recent results on the search for directional anisotropy of UHECRs using the latest data set collected by the TA SD array.
Speaker: Kazumasa Kawata (ICRR, University of Tokyo)
UHECR2018-kawata-v2.pdf
Study of the arrival directions of ultra-high-energy cosmic rays detected at the Pierre Auger Observatory 25m
The distribution of the arrival directions of ultra-high energy cosmic rays is, together with the spectrum and the mass composition, a harbinger of their nature and origin. As such, it has been the subject of intense studies at the Pierre Auger Observatory since its inception in 2004, with two main lines of analysis being pursued at different angular scales and at different energies. One concerns the study of the large-scale anisotropy and of its evolution as a function of energy. The technique used is that of harmonic analysis, performed over the whole energy range accessible by the Observatory, from sub-EeV energies to the highest ones. The other line of analysis regards in turn only the arrival directions of the highest-energy cosmic rays, namely those above a few tens of EeV: thanks to the high rigidities at play, it aims at the search for anisotropies on (relatively) small angular scales in association with catalogs of plausible astrophysical sources. The talk will review the outcome of such studies after almost 15 years of data taking. It will also illustrate the careful treatment of data as well as the methods used.
Speaker: Piera Luisa Ghia (IPNO)
AugerAnisotropiesUHECR2018.pdf
Covering the sphere at ultra-high energies: full-sky cosmic-ray maps beyond the ankle and the flux suppression 25m
Despite deflections by Galactic and extragalactic magnetic fields, the distribution of the flux of ultra-high energy cosmic rays (UHECRs) over the celestial sphere remains a most promising observable for the identification of their sources. This distribution is remarkably close to being isotropic. Thanks to a large number of detected events over the past years, a large-scale anisotropy at energies above 8 EeV has been identified, and there are also indications from the Telescope Array and Pierre Auger Observatory Collaborations of deviations from isotropy at intermediate angular scales (~20°) at the highest energies. In this contribution, we map the flux of UHECRs over the full sky at energies beyond each of two major features in the UHECR spectrum - the ankle and the flux suppression -, and we derive limits for anisotropy on different angular scales in the two energy regimes. In particular, full-sky coverage enables constraints on low-order multipole moments without assumptions on the strength of higher-order multipoles. Following previous efforts from the two collaborations, we build full-sky maps accounting for the relative exposure of the arrays and differences in the energy normalizations. These results are obtained by cross-calibrating the UHECR fluxes reconstructed in the declination band around the celestial equator covered by both observatories. We present full-sky maps at energies above ~10 EeV and ~50 EeV, using the largest datasets shared across UHECR collaborations to date. We report on anisotropy searches exploiting full-sky coverage and discuss possible constraints on the distribution of UHECR sources.
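A toy version of the cross-calibration step (invented spectra and scan range, not the working group's actual procedure): rescale one experiment's energies until the integral fluxes above a common threshold agree in the overlapping declination band.

```python
import numpy as np

def integral_flux(energies, E_thresh, exposure):
    # events above threshold per unit exposure
    return np.sum(energies >= E_thresh) / exposure

def cross_calibrate(E_a, exp_a, E_b, exp_b, E_thresh=2e19):
    # scan an energy-scale factor applied to experiment B's energies
    scales = np.linspace(0.8, 1.2, 401)
    diffs = [abs(integral_flux(E_a, E_thresh, exp_a)
                 - integral_flux(s * E_b, E_thresh, exp_b))
             for s in scales]
    return scales[int(np.argmin(diffs))]

rng = np.random.default_rng(2)
E_a = 1e19 * (rng.pareto(2.5, 2000) + 1.0)        # fake spectrum, expt A
E_b = 0.9 * 1e19 * (rng.pareto(2.5, 2000) + 1.0)  # same, with a 10% offset
print(cross_calibrate(E_a, 1.0, E_b, 1.0))        # recovers ~1.1
```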
Speaker: Jonathan Biteau (IPNO)
Anisotropy_report_AugerTA_v3.pdf
A Close Correlation between TA Hotspot UHECR Events and Local Filaments of Galaxies and its Implication 20m
The Telescope Array (TA) experiment identified a concentration of ultra-high-energy cosmic ray (UHECR) events on the sky, so-called hotspot. Besides the hotspot, the arrival directions of TA events show another characteristic feature, i.e., a deficit of events toward the Virgo cluster. As an effort to understand the sky distribution of TA events, we investigated the structures of galaxies around the hotspot region in the local universe. We here report a finding that there are filaments of galaxies, connected to the Virgo Cluster, and a correlation of statistical significance between the galaxy filaments and TA events. We then present an astrophysical model to explain the origin of the hotspot and the deficit of TA events.
Speaker: Dr Jihyun Kim (UNIST)
181010-UHECR2018-jkim.pdf
Convener: Charles Jui
High energy cosmic ray interactions and UHECR composition problem 30m
I'll discuss the differences between contemporary Monte Carlo generators of high energy hadronic interactions and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs). In particular, key directions for model improvements will be outlined. The prospect for a coherent interpretation of the data in terms of the primary composition will be investigated.
Speaker: Dr Sergey Ostapchenko (Frankfurt Institute for Advanced Studies (FIAS))
ostapchenko-uhecr.pdf
Measurements and tests of hadronic interactions at ultra-high energies with the Pierre Auger Observatory 20m
Extensive air showers are complex objects, resulting from billions of particle reactions initiated by a single cosmic ray at ultra-high energy. Their characteristics are sensitive both to the mass of the primary cosmic ray and to the details of hadronic interactions. Many of the interactions that determine the shower features occur in energy and kinematic regions beyond those tested by human-made accelerators.
We will report on the measurement of the proton-air cross section for particle production at center-of-mass energies per nucleon of 39 TeV and 56 TeV. We will also show comparisons of post-LHC hadronic interaction models with shower data by studying the moments of the distribution of the depth of the electromagnetic maximum, the number and production depth of muons in air showers, and finally a parameter based on the rise-time of the surface detector signal, sensitive to the electromagnetic and muonic components of the shower. While good agreement is found for observables based on the electromagnetic shower component, discrepancies are observed for muon-sensitive quantities.
Speakers: Dr Markus Roth (Karlsruhe Institute of Technology, Institut für Kernphysik, Karlsruhe, Germany), Dr Lorenzo Cazon (LIP, Lisbon)
2_Cazon_Roth-Hadronic Interactions.pdf
Hadronic interaction studied by TA 20m
The Telescope Array (TA) has been measuring ultra-high energy cosmic rays in the Northern hemisphere since 2008. Using hybrid detectors, namely a surface detector array (SD) and fluorescence telescopes (FDs), TA can measure the lateral and longitudinal developments of extensive air showers, respectively, in detail. Recent analysis of SD data reveals an excess of muons at large distances from the shower core with respect to the MC predictions. Deeply penetrating showers observed by the FDs were used to determine the p-air inelastic cross section. These measurements are useful for studying hadronic interactions beyond the energy reach of current accelerators. In this talk, we will review the TA results relevant to the study of high-energy hadronic interactions.
Speaker: Takashi Sako (ICRR, University of Tokyo)
UHECR2018_TAhadron_sako.pdf
Report on tests and measurements of hadronic interaction properties with air showers 25m
Unambiguously determining the mass composition of ultra-high energy cosmic rays is a key challenge at the frontier of cosmic ray research. The mass composition is inferred from air shower observables using air shower simulations, which rely on hadronic interaction models. Current hadronic interaction models lead to varying interpretations, therefore tests of hadronic interaction models with air shower measurements are important. Such tests may even reveal new physics phenomena. Tests have been done by various experiments and cover the cosmic ray energies from PeV to tens of EeV. In this talk, the Working Group on Hadronic interactions and Shower Physics presents a summary of tests and measurements related to hadronic interactions in air showers from the Pierre Auger Observatory, Telescope Array, IceCube, KASCADE-Grande, EAS MSU, SUGAR and NEVOD-DECOR. Results include measurements of the proton-air cross-section, the lateral density profile of muons in air showers as well as electrons and photons, TeV muons, and the muon production depth. Our goal is to develop a consistent picture out of the individual measurements, to gain a detailed understanding where current hadronic interaction models succeed or fail in describing air shower observables.
dembinski_whisp_report.pdf
Prospects of testing an UHECR single source class model with the K-EUSO orbital telescope 3m
KLYPVE-EUSO (K-EUSO) is a planned orbital detector of ultra-high energy cosmic rays (UHECRs), which is to be deployed on board the International Space Station. K-EUSO is expected to have an almost uniform exposure over the celestial sphere and to register from 120 to 500 UHECRs at energies above ~57 EeV in a 2-year mission. We employ the CRPropa3 package to estimate the prospects of testing the UHECR single source class model by Kachelriess, Kalashev, Ostapchenko and Semikoz (2017) with K-EUSO in terms of the large-scale anisotropy. According to the simulations, K-EUSO will be able to probe the model if it records ~200 or more events and the flux from the source constitutes ~20% of the whole data set.
Speaker: Mikhail Zotov (Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University)
M.Zotov-Testing.KKOS.model.with.K-EUSO-UHECR2018-poster.pdf
A novel method for the absolute end-to-end calibration of the Auger fluorescence telescopes. 3m
The fluorescence detector technique uses the atmosphere as a calorimeter. Besides precise monitoring of the parameters of the atmosphere, proper knowledge of the optical properties in the UV range of all optical components involved in the measurement of the fluorescence light is vital.
Until now, the end-to-end calibration was performed with a 4.5 m^2 large, uniformly lit light source attached to the aperture of the telescopes. To improve maintainability, we propose an alternative setup where a small and lightweight light source of known optical properties re-samples the measurement of the big light source piece by piece. This is achieved by moving a light source based on an integrating sphere in two dimensions in front of the aperture. A prototype setup has been installed, and we are currently in the phase of optimizing the parameters of the system and the procedures. The outcome we are aiming for is to reduce the effort of the procedure without diminishing the quality of the measurement.
First measurements with this setup have already been performed, and measurements of the geometrical and optical properties of the light source are an ongoing activity. We will present our calibration scheme and the first, preliminary results.
Speaker: Hermann-Josef Mathes
UHECR2018-XYScanner_V2.pdf
Preliminary results of the AMIGA engineering array at the Pierre Auger Observatory 3m
The prototype array of the underground muon detector as part of the AMIGA enhancement was built and operated until November 2017. During this engineering phase, the array was composed of seven stations. The detector design as well as its performance for physics deliverables were validated and optimized. The most notable improvement was the selection of silicon photo-multipliers rather than photo-multiplier tubes as optical devices. It has been demonstrated that the detectors resemble the behavior of ideal Poissonian counters. The counting efficiency for units of $10 \ \mathrm{m^2}$ was established to be $>98 \%$ for SiPMs, and $83\%$ for PMTs. The energy evolution of the muon densities measured with AMIGA shows a slope of $b = 0.90 \pm 0.04$, in accordance with the one expected for a constant composition. The full-sized underground muon detector array with 61 stations is foreseen to be completed by the end of 2019.
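The slope $b$ quoted above is the exponent of a power-law fit of muon density versus energy; a minimal version of such a fit, on invented data points rather than AMIGA data, is:

```python
import numpy as np

# Illustrative log-log fit of rho_mu ~ E^b; the data points are fake.

E   = np.array([3e17, 6e17, 1e18, 2e18])   # primary energy (eV), invented
rho = np.array([0.9, 1.7, 2.7, 5.0])       # muon density (m^-2), invented

b, log_a = np.polyfit(np.log10(E), np.log10(rho), 1)
print(f"slope b = {b:.2f}")
```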
Speaker: Alvaro Taboada Nunez (IKP, KIT / ITeDA)
poster-impress.pdf
Average shape of longitudinal shower profiles measured at the Pierre Auger Observatory 3m
The average profiles of cosmic ray showers developing with traversed atmospheric depth are measured for the first time, with the Fluorescence Detectors at the Pierre Auger Observatory. The profile shapes are well reproduced by the Gaisser-Hillas parametrization, at the 1% level in a 500 g/cm2 interval around the shower maximum, for cosmic rays with log(E/eV) > 17.8. The results are quantified with two shape parameters, which are measured as a function of energy.
The average profiles carry information on the primary cosmic ray and its high energy hadronic interactions. The shape parameters predicted by the commonly used models are compatible with the measured ones within experimental uncertainties. These are dominated by systematic uncertainties which, at present, prevent a detailed composition analysis.
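For reference, the Gaisser-Hillas parametrization mentioned above can be evaluated directly; the parameter values in this sketch (N_max, X_max, X0, lambda) are typical illustrative numbers, not the measured average Auger profiles:

```python
import numpy as np

# Gaisser-Hillas longitudinal shower profile.

def gaisser_hillas(X, N_max, X_max, X0, lam):
    """Number of charged particles at slant depth X (g/cm^2)."""
    num = np.clip((X - X0) / (X_max - X0), a_min=0.0, a_max=None)
    return N_max * num**((X_max - X0) / lam) * np.exp((X_max - X) / lam)

X = np.linspace(200.0, 1400.0, 7)
print(gaisser_hillas(X, N_max=7e9, X_max=750.0, X0=-100.0, lam=60.0))
```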
Speaker: Sofia Andringa (LIP)
poster_shape2018_v2.pdf
On the maximum energy of protons in the hotspots of AGN jets 3m
It has been suggested that relativistic shocks in extragalactic jets may accelerate the highest energy cosmic rays. The maximum energy to which particles can be accelerated via a diffusive mechanism depends on the magnetic turbulence near the shock but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a PeV.
The cut-off of the synchrotron spectrum in the hotspots of powerful radiogalaxies is typically observed between infrared and optical frequencies, indicating that the maximum energy of non-thermal electrons accelerated at the jet termination shock is about 1 TeV for a canonical magnetic field of 100 micro Gauss. Based on theoretical considerations and observational data we show that the maximum energy of electrons cannot be constrained by synchrotron losses as usually assumed, unless the jet density is unreasonably large and most of the jet kinetic energy goes to non-thermal electrons. The maximum energy is ultimately determined by the ability to scatter particles downstream of the shock, and this limit applies to both electrons and protons. Therefore, the maximum energy of protons is also about 1 TeV. We show that non-resonant hybrid (Bell) instabilities generated by the streaming of cosmic rays can grow fast enough to amplify the jet magnetic field up to 100 micro Gauss and accelerate particles up to the maximum energies observed in the hotspots of radiogalaxies.
Speaker: Anabella Araudo
poster_CygnusA.pdf
Ultra-high-energy cosmic rays from supermassive black holes 3m
The mechanism of acceleration of charged particles to ultra-high energies above EeV, up to ZeV, still remains unsolved. Recent multimessenger observations strongly established the sources of ultra-high-energy cosmic rays (UHECRs) as extragalactic supermassive black holes (SMBHs). I will show that UHECRs can be produced within neutron beta-decay in the dynamical environment of SMBHs located at the centers of galaxies. For this, I will present a super-efficient mechanism for the energy extraction from SMBHs. Magnetic fields, which are usually present in the vicinity of black holes, play the role of a catalyzing element that increases the efficiency of the energy extraction. On the other hand, synchrotron losses and back-reaction of individual charged particles put constraints on the mass of the SMBH, the magnetic fields, and the propagation distances of UHECRs.
Speaker: Arman Tursunov
Radio detection of cosmic rays with the Auger Engineering Radio Array 2m
The Auger Engineering Radio Array (AERA) complements the Pierre Auger Observatory with 150 radio-antenna stations measuring in the frequency range from 30 to 80 MHz. With an instrumented area of 17 km^2, the array constitutes the largest cosmic-ray radio detector built to date, allowing us to do multi-hybrid measurements of cosmic rays in the energy range of ~10^17 eV up to several 10^18 eV. We give an overview of AERA results and discuss the significance of radio detection for the validation of the energy scale of cosmic-ray detectors as well as for mass-composition measurements.
Speaker: Tim Huege (Karlsruhe Institute of Technology)
huege-aera-overview-v1.pdf
Atmospheric aerosol effect on FD data analysis at the Pierre Auger Observatory 1m
The atmospheric aerosol monitoring system of the Pierre Auger Observatory, initiated in 2004, continues to operate smoothly. Two laser facilities (the Central Laser Facility, CLF, and the eXtreme Laser Facility, XLF) each fire sets of 50 laser shots four times per hour during Fluorescence Detector (FD) shifts. The FD measures these UV laser tracks. Analysis of these tracks yields hourly measurements of the aerosol attenuation, expressed as Vertical Aerosol Optical Depth (VAOD) profiles. These VAOD profiles, which may be highly variable, are used to correct the observed longitudinal UV light profiles of the extensive air shower tracks detected by the FD. Two analysis techniques are used to obtain the VAOD profiles; they have been proven to be fully compatible. Measurement uncertainties of the VAOD profiles contribute to the measurement uncertainty of the reconstructed energy and depth of maximum development (Xmax) of air shower events. To confirm the validity of the VAOD profiles applied to the FD event analysis, the flatness of the ratio of reconstructed SD to FD energy as a function of the aerosol transmission to the depth of shower maximum has been verified at the level of 0.6%.
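The correction itself amounts to dividing the observed light by an aerosol transmission factor built from the VAOD profile; a schematic version, with an assumed slant-path form and made-up numbers rather than the actual Auger aerosol-analysis code:

```python
import math

# Sketch: correct an FD light signal for aerosol attenuation along the
# line of sight.  The exponential slant-path form and all numbers are
# illustrative assumptions.

def aerosol_transmission(vaod_at_height, elevation_deg):
    """Transmission from a track element at some height down to the FD,
    given the vertical aerosol optical depth integrated to that height."""
    return math.exp(-vaod_at_height / math.sin(math.radians(elevation_deg)))

measured_photons = 1.0e4
T = aerosol_transmission(vaod_at_height=0.04, elevation_deg=15.0)
print(measured_photons / T)   # photons corrected for aerosol attenuation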
Speaker: Laura Valore (Universita' di Napoli Federico II)
Poster_Valore_UHECR2018.pdf
CORSIKA upgrade, plans and status 1m
Speaker: Ralf Ulrich (KIT)
Convener: Ralph Engel (Karlsruhe Institute of Technology)
LHC results 30m
Speaker: David d'Enterria (CERN)
dde_uhecr_paris_oct18.pdf
Probing the hadronic energy spectrum in proton air interactions through the fluctuations of the EAS muon content 20m
The average number of muons in air showers and its connection with the development of air showers has been studied extensively in the past. With the upcoming detector upgrades, UHECR observatories will be able to also probe higher moments of the muon distribution. Here we present a study of the physics of the fluctuations of the muon content. In addition to proving that the fluctuations must be dominated by the first interactions, we show that the fluctuations and the entire shape of the distribution of the number of muons are determined by the energy spectrum of hadrons in the first interaction.
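A toy Heitler-Matthews-style simulation — our illustration, not the authors' code — shows the mechanism: the muon number is a sum of power-law contributions from the energy fractions x_j carried by hadrons in the first interaction, so the spread of those fractions drives the shower-to-shower fluctuations. All parameters below are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)

beta, xi = 0.92, 1e10   # assumed muon exponent and critical energy (eV)
E0 = 1e19               # primary energy (eV)

def n_mu_one_shower(n_sec=50):
    # random energy fractions x_j of the first interaction (sum to 1)
    x = rng.dirichlet(np.full(n_sec, 0.3))
    return np.sum((x * E0 / xi) ** beta)

n_mu = np.array([n_mu_one_shower() for _ in range(2000)])
print(n_mu.mean(), n_mu.std() / n_mu.mean())  # relative fluctuation
```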
Speaker: felix riehn (LIP, Lisbon)
uhecr2018_181011_riehn_v2.pdf
EPOS 3 20m
With the recent results of large hybrid air shower experiments, it is clear that the simulations of the hadronic interactions are not good enough to obtain a consistent description of the observations. Even the most recent models tuned after the first run of LHC show significant discrepancy with air shower data. Since then many more data have been collected at LHC and lower energies which are not necessarily well described by these models. So before claiming any new physics explanation, it is necessary to have a model which can actually describe accelerator data in a very detailed way. That is the goal of EPOS 3, to understand both soft and hard particle production not only in light systems like proton-proton interactions but in heavy ions too. The latest results of the model will be presented and in particular the correlations between various observables which are very important to understand the real physical processes.
Speaker: Tanguy Pierog (KIT, IKP)
EPOS3_CR_Pierog.pdf
Recent results from the LHCf experiment 20m
The LHCf experiment aims for measurements of the forward neutral particles at an LHC interaction point to test the hadronic interaction models which are widely used in cosmic-ray air-shower simulations. LHCf operated with proton-proton collisions at a center-of-mass energy of 13 TeV in 2015. The LHCf detectors are composed of sampling and imaging calorimeters, and they were installed on both sides of the ATLAS interaction point. We have measured the energy spectra of very forward photons and neutrons, and these results will be reviewed in the presentation.
We also performed a joint analysis with the ATLAS experiment to measure the contribution of diffractive interactions to forward photon production. In addition to the operations at the LHC, we operated at BNL-RHIC with proton-proton collisions at 510 GeV collision energy to evaluate the energy scaling of forward particle production. These activities will also be introduced in the presentation.
Speaker: Hiroaki Menjo (ISEE, Nagoya University, Japan)
20181011_UHECR2018_Menjo.pdf
Convener: Piergiorgio Picozza (INFN and University of Rome Tor Vergata)
Overview of the Auger@TA project and preliminary results from Phase I 20m
Auger@TA is a joint experimental program of the Telescope Array experiment (TA) and the Pierre Auger Observatory (Auger), the two leading ultra-high energy cosmic-ray experiments located respectively in the northern and southern hemispheres. The aim of the program is to achieve a cross-calibration of the Surface Detector (SD) from both experiments. The first phase of this joint effort is currently underway and consists of comparing the response of two Auger and TA SD stations co-located at the TA central laser facility for a set of high-energy showers reconstructed by TA. The Auger and TA SD stations are based on different detection media and respond differently to the electromagnetic and muonic components of the shower. Hence, the study ultimately relies on comparing the signals induced in the SD stations with simulations using the shower parameters obtained by TA. Preliminary results will be presented. Phase II of the program consists of the deployment of a micro-array of six one-PMT Auger stations collocated with TA SD stations within TA, which will take data independently. In this phase, both station-level and event-level comparisons, including reconstruction parameters, can be performed for a subset of showers landing within the micro-array. We anticipate a deployment of the Auger micro-array in the first half of 2019.
Speakers: Fred Sarazin (Colorado School of Mines), and the Pierre Auger and Telescope Array Collaborations
Sarazin_UHECR2018.pdf
Air showers, hadronic models, and muon production. 20m
We report on a study of the mechanisms of muon production during the development of extensive air showers initiated by ultra-high-energy cosmic rays. In particular, we analyze and discuss the observed discrepancies between experimental measurements and simulated data.
Speaker: Sergio Sciutto (Departamento de Física - Universidad Nacional de La Plata - Argentina)
SjSciuttoAiresMuonsUHECR2018.pdf
Atmospheric Muons Measured with IceCube 20m
IceCube is a cubic-kilometer Cherenkov detector in the deep ice at the geographic South Pole. The dominant event yield is produced by penetrating atmospheric muons with energies above several 100 GeV. Due to its large detector volume, IceCube provides unique opportunities to study atmospheric muons with large statistics in great detail. Measurements of the energy spectrum and the lateral separation distribution of muons offer insights into hadronic interactions during the air shower development, for example, and can be used to test hadronic models. In addition, the surface detector IceTop provides information about the electromagnetic component of air showers. Together with muon measurements in the deep ice this can be used to derive the mass composition of cosmic rays.
We will present an overview of various measurements of atmospheric muons in IceCube, including the energy spectrum of muons between 10 TeV and 1 PeV. This is used to derive an estimate of the prompt contribution of muons, originating from the decay of heavy (mainly charmed) hadrons and unflavored mesons. We will also present measurements of the lateral separation distributions of TeV muons between 150 m and 450 m for several initial cosmic ray energies. In addition, studies on the seasonal variations of atmospheric muon fluxes in IceCube will be shown. Finally we will introduce new techniques to study the cosmic ray mass composition up to EeV energies. This hybrid approach uses muon measurements in the deep ice detector, together with information from the surface detector array.
Speakers: Dr Dennis Soldin (University of Delaware), for the Ice Cube collaboration
muons_in_IceCube_soldin.pdf
Results of the first orbital ultra-high-energy cosmic ray detector TUS in view of future space mission KLYPVE-EUSO 20m
The observation of ultra-high energy cosmic rays (UHECRs) from Earth orbit relies on the detection of the UV fluorescence tracks of extensive air showers (EAS). This technique is widely used by ground-based detectors. Analogous measurements from space will make it possible to achieve the largest instantaneous aperture, observing the whole sky with nearly homogeneous exposure. This is important for an efficient search for anisotropy and for measuring the spectrum and composition of the UHECR flux.
The first experience of UHECR measurements from space was obtained with the operation of the TUS (Tracking Ultraviolet Set-up) detector on board the Lomonosov satellite, launched on April 28, 2016. The TUS detector is a UV telescope with a 2 m2 mirror and a ±4.5° field of view. During two years of operation, important information was obtained on the transient UV atmospheric emission that determines the measurement conditions. The TUS trigger was optimized for the EAS search, but it has to operate in conditions of atmospheric glow of both natural origin (aurora light, scattered moonlight, thunderstorms) and anthropogenic origin (city lights). A search for EAS events in the TUS data and their analysis, with an emphasis on a strong UHECR candidate registered on October 3, 2016, has been carried out. The measurement conditions were studied to exclude thunderstorm atmospheric events and anthropogenic sources. The arrival direction and energy of the primary particle have been estimated based on results of simulations and new reconstruction algorithms.
KLYPVE-EUSO (K-EUSO) is the next step in the program of UHECR studies from space. It is a large fluorescence telescope to be installed on the Mini-Research Module of the Russian Segment (RS) of the International Space Station (ISS). Recently, design studies were done by SINP MSU together with the JEM-EUSO collaboration, based on the TUS experience and the collaboration's expertise, to enhance the instrument performance with an improved detector of larger field of view. The optical design is based on a Schmidt telescope, including a 4 m diameter carbon plastic mirror and a 2.5 m corrector lens made of Poly Methyl Methacrylate (PMMA). This increases the telescope field of view to 40 degrees. The focal surface of K-EUSO is nearly identical to that of JEM-EUSO and consists of ~10^5 pixels with 1 mrad angular resolution. The launch of the experiment is scheduled for 2022 on the RS of the ISS, for at least two years of operation.
Speaker: P. Klimov
4-Klimov_UHECR-2018.pdf
Results from the first missions of the JEM-EUSO program 20m
The origin and nature of Ultra-High Energy Cosmic Rays (UHECRs) remain unsolved in contemporary astroparticle physics. To give an answer to these questions is rather challenging because of the extremely low flux of a few per km^2 per century at extreme energies such as E > 5 × 10^19eV. The objective of the JEM-EUSO program, Extreme Universe Space Observatory, is the realization of a space mission devoted to scientific research of cosmic rays of highest energies. Its super-wide-field telescope will look down from space onto the night sky to detect UV photons emitted from air showers generated by UHECRs in the atmosphere.
The JEM-EUSO program includes different missions using fluorescence detectors to make a proof-of-principle of UHECR observation from space and to raise the technological level of the instrumentation to be employed in a space mission. EUSO-TA, installed at the Telescope Array site in Utah in 2013, is in operation. It has already detected 9 UHECRs in coincidence with the Telescope Array fluorescence detector at Black Rock Mesa. EUSO-Balloon flew on board a stratospheric balloon in August 2014. It measured the UV intensity over forests, lakes and the city of Timmins, and demonstrated the observation of UHECR-like events using laser tracks. EUSO-SPB was launched on board a super pressure balloon on April 25th and flew for 12 days. It proved the long-term functionality of all the subsystems of the telescope, observed the UV emission over oceans, and had a self-trigger system to observe UHECRs with energy E > 3x10^18 eV. TUS, the Russian mission on board the Lomonosov satellite, in orbit since April 28th 2016, is now included in the JEM-EUSO program and has so far detected a few interesting signals in the UHECR trigger mode. Mini-EUSO is in its final phase of integration in Italy, where several performance tests are being held. Mini-EUSO will be installed inside the International Space Station (ISS) in late 2018 or early 2019.
During this contribution I will summarise the main results obtained so far by such missions and put them in prospect of future space detectors such as K-EUSO and POEMMA.
Speaker: Mario Bertaina (University & INFN Torino)
mario-uhecr2018_v2.pdf
Leading cluster approach to simulations of hadron collisions with GHOST generator 20m
We present the current version of the generator GHOST, which can be used in the simulation of non-diffractive (ND), non-single-diffractive (NSD), single diffractive (SD) and double diffractive (DD) events at cosmic ray energies.
The generator is based on a four-Gaussian parameterization of the pseudorapidity distribution, which is related to the leading-cluster approach to the distribution of secondary particles. For rapidity and pseudorapidity, the charged multiplicity distribution (as well as the total multiplicity including neutral secondaries) is derived using the negative binomial distribution, as used previously at the ISR and LHC. The transverse momenta pt are taken from the power-law distribution inserted in the CORSIKA code.
Recently we updated the model using the lately measured pseudorapidity distributions of neutrons, π0's and photons by the LHCf group.
Rapidity and pseudorapidity distributions, the average multiplicity and average central densities at √s = 0.2, 1, 7, 8, 13 TeV were obtained, and we have provided guidelines for extrapolations above 14 TeV. We observed a possible convergence of the negative binomial distribution in the energy range 8-13 TeV, in agreement with the theoretical limit at √s ≈ 40 TeV expected for the SSC 25 years ago.
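A minimal sketch of the sampling recipe described in the abstract — negative binomial multiplicity, a four-Gaussian pseudorapidity mixture, and a power-law pt — with placeholder parameters that are not fitted GHOST values:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_event(n_mean=60.0, k=3.0):
    # charged multiplicity from a negative binomial distribution
    p = k / (k + n_mean)
    n = rng.negative_binomial(k, p)
    # pseudorapidity from a four-Gaussian (leading-cluster) mixture
    mu  = np.array([-6.0, -2.0, 2.0, 6.0])
    sig = np.array([ 1.0,  1.5, 1.5, 1.0])
    comp = rng.integers(0, 4, size=n)
    eta = rng.normal(mu[comp], sig[comp])
    # transverse momentum from a power law, pdf ~ (1 + pt/p0)^-m
    u = rng.random(n)
    p0, m = 1.3, 7.0
    pt = p0 * (u ** (-1.0 / (m - 1.0)) - 1.0)
    return eta, pt

eta, pt = sample_event()
print(len(eta), eta.mean(), pt.mean())
```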
Speaker: Jean-Noel Capdevielle (APC et IRFU CEA-Saclay)
Capdevielle_lead. clusterC.pdf
4:00 PM → 10:00 PM
Social Event Musée d'Orsay
Convener: Shoichi Ogio (Osaka City University)
Status and prospects of the TAx4 experiment 20m
The TAx4 experiment is a project to observe the highest energy cosmic rays by expanding the detection area of the TA experiment with newly constructed surface detectors (SDs) and fluorescence detectors (FDs). The construction of both SDs and FDs is ongoing. The new SDs are arranged in a square grid with 2.08 km spacing to the north east and south east of the TA SD array. The field of view of the new FDs overlaps the detection area of the new SDs in order to observe SD-FD hybrid events. The new SDs are planned to be deployed early next year. First light with the new FDs was already observed in February 2018. The prospects of the detectors will be shown in this talk. In particular, the hotspot and the energy spectrum anisotropy are expected to be understood in more detail, building on the indications obtained by the TA experiment. The physics expectations will also be shown in this talk.
Speaker: Dr Eiji Kido (Institute for Cosmic Ray Research, University of Tokyo)
Status_Prospects_TAx4SD_UHECR2018_open.pdf
AugerPrime: the Pierre Auger Observatory upgrade. 20m
The world's largest exposure to ultra-high energy cosmic rays, accumulated by the Pierre Auger Observatory, has led to major advances in our understanding of their properties, but the many unknowns about the nature and distribution of the sources, the primary composition and the underlying hadronic interactions prevent the emergence of a uniquely consistent picture.
The new perspectives opened by the current results call for an upgrade of the Observatory, whose main aim is the collection of new information about the primary mass of the highest energy cosmic rays on a shower-by-shower basis.
The evaluation of the fraction of light primaries in the region of suppression of the flux will open the window to charged particle astronomy, allowing for composition-selected anisotropy searches. In addition, the properties of multiparticle production will be studied at energies not covered by man-made accelerators and new or unexpected changes of hadronic interactions will be searched for.
We present the AugerPrime upgrade, describing the new plastic scintillator detectors on top of the water-Cherenkov detectors of the surface array (SD), the new SD electronics and the extension of the dynamic range with an additional PMT installed in the water-Cherenkov detectors. We discuss the expected performances and the improved physics sensitivity of the upgraded detectors and present the first data collected with the already running Engineering Array.
Speaker: Antonella Castellina (INFN & INAF-OATo)
AugerPrime_UHECR2018.pdf
A next-generation ground array for the detection of ultrahigh-energy cosmic rays: the Fluorescence detector Array of Single-pixel Telescopes (FAST) 20m
The origin and nature of ultrahigh-energy cosmic rays (UHECRs) is one of the most intriguing mysteries in astroparticle physics. The two largest observatories currently in operation, the Telescope Array Experiment in central Utah, USA, and the Pierre Auger Observatory in western Argentina, have been steadily observing UHECRs in both hemispheres for over a decade. We highlight the latest results from both of these experiments, and address the requirements for a next-generation UHECR observatory. The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a design concept for a next-generation UHECR observatory, addressing the requirements for a large-area, low-cost detector suitable for measuring the properties of the highest energy cosmic rays with an unprecedented aperture. We have developed a full-scale prototype consisting of four 200 mm photomultiplier tubes at the focus of a segmented mirror of 1.6 m in diameter. In October 2016 and September 2017 we installed two such prototypes at the Black Rock Mesa site of the Telescope Array Experiment. Both telescopes have been steadily taking data since installation. We report on preliminary results of the full-scale FAST prototypes, including measurements of artificial light sources, distant ultra-violet lasers, and UHECRs. Furthermore, we discuss our plan to install an additional identical FAST prototype at the Pierre Auger Observatory. Possible benefits to the Telescope Array and the Pierre Auger Observatory include a comparison of the transparency of the atmosphere above both experiments, a study of the systematic uncertainty associated with their existing fluorescence detectors, and a cross-calibration of their energy and $X_{max}$ scales.
Speaker: Toshihiro Fujii (ICRR, University of Tokyo)
181012_FAST_UHECR2018.pdf
Detection of ultra-high energy cosmic ray air showers by Cosmic Ray Air Fluorescence Fresnel-lens Telescope for next generation 20m
In the future, ultra-high energy cosmic ray (UHECR) observatories will need to be expanded because of the small flux. Cost reduction is therefore a useful strategy for realizing a huge-scale observatory. For this purpose, we are developing a simple-structure cosmic ray detector named the Cosmic Ray Air Fluorescence Fresnel-lens Telescope (CRAFFT). We deployed CRAFFT detectors at the Telescope Array site and performed a test observation. We have successfully observed UHECR air showers. We will report the status and the results of the test observation.
Speaker: Dr Yuichiro Tameda (Osaka Electro-Communication University)
UHECR2018-YTameda-CRAFFT.pdf
Precision measurements of cosmic rays up to the highest energies with a large radio array at the Pierre Auger Observatory 20m
High-energy cosmic rays impinging on the atmosphere of the Earth induce cascades of secondary particles, the extensive air showers. Many particles in the showers are electrons and positrons. Due to interactions with the magnetic field of the Earth they emit radiation with frequencies of several tens of MHz. In recent years, huge progress has been achieved in this field through the strong activities of various groups. The radio technique is now routinely applied to measure the properties of cosmic rays, such as their arrival direction, their energy, and their particle type/mass.
Horizontal air showers have a large footprint of the radio emission on the ground and they can be detected with sparse arrays with kilometer-scale spacing. With the Auger Engineering Radio Array (AERA) horizontal air showers are measured. Recent results will be presented. These measurements clearly demonstrate the feasibility to measure horizontal air showers with the radio technique. Ideas will be outlined to install radio antennas on all surface detector stations of the Pierre Auger Observatory in order to measure the properties of cosmic rays (in particular their particle type/mass) up to energies exceeding 10^20 eV.
Speaker: Dr Jörg Hörandel (Radboud University Nijmegen)
jrh-pao-rd.pdf
Sessions: future Friedel Amphitheater
Convener: M. Panasyuk
In-ice radio arrays for the detection of ultra-high energy neutrinos 20m
Radio techniques show the most promise for measuring and characterizing the astrophysical neutrino flux above about 10^17 eV. Complementary strategies include observing a target volume from a distance and deploying sensors in the target volume itself. I will focus on the current status of experiments utilizing the latter strategy, in-ice radio arrays. I will give an overview of results from the past fifteen years of experience and the status of developing plans for the future. I will preview what we might expect from in-ice arrays in terms of astrophysics and particle physics results in the next ten years.
UHECR18_inice_Connolly.pdf
The GRAND Project 20m
The Giant Radio Array for Neutrino Detection (GRAND) aims at detecting ultra-high-energy extraterrestrial neutrinos via the extensive air showers induced by the decay of tau leptons created in the interaction of neutrinos under the Earth's surface. Consisting of an array of $\sim200\,000$ radio antennas deployed over $\sim200\,000\,$km$^2$, GRAND plans to reach, for the first time, a sensitivity of $\sim10^{-10}\,{\rm GeV}\,{\rm cm}^{-2}\,{\rm s}^{-1}\,{\rm sr}^{-1}$ above $5\times10^{17}$ eV and a sub-degree angular resolution, beyond the reach of other planned detectors. In this talk, we will show preliminary designs and simulation results, plans for the ongoing, staged approach to construction, and the rich research program made possible by the proposed sensitivity and angular resolution.
Speaker: Olivier Martineau (IN2P3)
UHECR18.pptx
The space road to UHECR observations: challenges and expected rewards 20m
Significant progress has been made in the last decade in the field of Ultra-High-Energy Cosmic Rays (UHECRs), thanks to the operation of large ground-based detectors and to the renewed theoretical interest that they triggered. While multi-messenger astronomy is rapidly developing worldwide, the sources of the charged messengers, namely the cosmic rays, are still to be determined, and the acceleration process remains to be understood. Even at the highest energies, the particle deflections by intervening magnetic fields appear to be too large to allow direct identification, at least with the current statistics. Concentrating on the highest energies has the advantage of reducing the number of sources, thanks to the so-called GZK effect, and thus potentially reducing source confusion. However, the very low flux of UHECRs in the GZK range requires huge detectors to be deployed. Alternatively, a single instrument could be used to significantly increase the current statistics if it could be operated from space, looking down to the nadir to detect the fluorescence light of UHECR-induced air showers over a huge volume of atmosphere. In addition, such a space mission would cover the whole celestial sphere and make it possible to draw the first complete map of UHECRs with essentially uniform coverage and systematics. Such a program has been undertaken by the JEM-EUSO Collaboration. In this talk, I will review the major steps taken along this road to space, including ground-based experiments and balloon flights. I will also present the future planned missions, on a super-pressure balloon as well as in/on the ISS. Finally, I will discuss the current efforts to extend the science case of such a mission to high-energy neutrino astronomy, complementing the ground-based neutrino detectors, and address the interesting issue of high-altitude UHECR showers, which only a space mission could observe.
Speaker: Etienne Parizot (APC - University Paris 7)
UHECR2018_Paris_parizot.pdf
POEMMA: Probe Of Multi-Messenger Astrophysics 20m
Developed as a NASA Astrophysics Probe mission concept study, the Probe Of Multi-Messenger Astrophysics (POEMMA) science goals are to identify the sources of ultra-high energy cosmic rays (UHECRs) and to observe cosmic neutrinos above 10 PeV. POEMMA consists of two satellites flying in loose formation at 525 km altitudes. A novel focal plane design is optimized to observe the UV air fluorescence signal in a stereoscopic UHECR observation mode and the Cherenkov signals from air showers from UHECRs and neutrino-induced tau leptons in an Earth-limb viewing mode. POEMMA is designed to achieve full-sky coverage and significantly higher sensitivity to the highest energy cosmic messengers compared to what have been achieved so far by ground-based experiments. POEMMA will measure the spectrum, composition, and full sky distribution of the UHECRs above 10 EeV to identify the most energetic cosmic accelerators in the universe and study the acceleration mechanism(s). POEMMA will also have high sensitivity to cosmogenic neutrinos by observing the upward-moving air showers induced from tau neutrino interactions in the Earth. POEMMA will also be able to re-orient to a Target-of-Opportunity (ToO) neutrino mode to view transient astrophysical sources. In this talk, the science goals, instrument design, launch and mission profile, and simulated UHECR and neutrino measurement capabilities for POEMMA will be presented.
Speaker: Dr John Krizmanic (CRESST/NASA//GSFC/UMBC)
POEMMA-UHECR2018-Krizmanic.pdf
Closing and Concluding Remarks 5m
Speaker: Ralph Engel (Karlsruhe Institute of Technology)
Mini Workshop on Future of UHECR Institut Henri Poincaré (IHP)
Institut Henri Poincaré (IHP)
Conveners: Ralph Engel (Karlsruhe Institute of Technology), Shoichi Ogio (Osaka City University)
Introduction 5m
Speakers: Ralph Engel (Karlsruhe Institute of Technology), Shoichi Ogio (Osaka City University)
Status and open problems in ultrahigh-energy cosmic ray and neutrino physics 35m
Speaker: Paolo Lipari
01 lipari_uhecr2018.pdf
Origin of UHECR anisotropies and what we can learn from them 10m
Speaker: Prof. Günter Sigl (University of Hamburg)
02 Sigl-anisotropies.pdf
Mixed composition and the chances of finding UHECR sources 10m
03 Unger-uhcerFuture.pdf
Towards a Global Cosmic Ray Observatory (GCOS) - requirements for a future observatory 10m
Speakers: Ralph Engel (Karlsruhe Institute of Technology), Andreas Haungs (KIT), Dr Markus Roth (Karlsruhe Institute of Technology, Institut für Kernphysik, Karlsruhe, Germany)
04 GCOS-Engel-UHECR2018.pdf
A giant air shower detector 10m
05 jrh-future.pdf
Layered surface detector (10 min) 10m
Speaker: Ioana Maris (Universitaet und Forschungszentrum Karlsruhe)
06 IoanaMaris_Segmented.pdf
Discussion time 30m
Coffee break 20m Friedel Amphitheater
Mini Workshop on Future of UHECR Hermite Amphitheater (IHP)
Hermite Amphitheater
Plans for GRAND 200k 10m
Speaker: Kumiko Kotera (Institut d'Astrophysique de Paris)
18_10_12_UHECR_kotera.pdf
A "snake array" of fluorescence detectors (10 min) 10m
Speaker: Pierre Sokolsky (University of Utah)
08 Sokolsky-snake.pdf
SKA with muon counters as super-cosmic-ray detector in the transition energy region 10m
09 Huege-futureworkshop-v1.pdf
Lower energy TALE, down to 10^14 eV 10m
10 jui-transition.pdf
On the importance of analyzing very-high and ultra-high energy data together, towards a new working group for UHECR symposia 10m
11 Haungs-newWG-UHECR18.pdf
Discussion 20m | CommonCrawl |
How close would Earth have to be for us to detect it was habitable, and then inhabited?
Given our current technology (or technology that is near implementation), how close would a clone of our Solar System (and so also Earth) have to be to us in order to detect that the cloned Earth was habitable, and how close would we have to be to detect that there is life on the planet (excluding radio signals the cloned humans have broadcast into space)?
Basically I'm asking: if we assume the worst-case scenario where life only exists on Earth-like planets, and that the life is the same as ours (i.e. is intelligent, builds cities, etc.), at what range, with our current technology and methods/techniques, would we be unable to detect a planet and civilisation the same as our own (seeing as it's the only civilisation we know about)?
EDIT: Another way to put this, assume every star system is identical to the Solar System, using our current technology/techniques what is the furthest planet we could "see" that is habitable, and what is the furthest planet we could "see" that is inhabited by a species identical to our own (so the identical Earth 500 lightyears away would actually be in the year 2513, so we'd "see" it in 2013).
astronomy exoplanets
$\begingroup$ If the civilization is intelligent, we can expect it to emit radio signals. Detecting these radio signals could prove their existence. However, there would be no way to prove that the civilization was like ours, without visiting them. $\endgroup$ – Pranav Hosangadi Dec 10 '13 at 23:38
$\begingroup$ I would think the methane and carbon dioxide in out atmosphere would be a give-away that Earth is habitable. Methane has a pretty short half-life in our atmosphere due to interaction with solar wind and cosmic rays so we have to constantly replenish it with life. Perhaps our atmospheric absorption spectrum gives us away from many lightyears? $\endgroup$ – Brandon Enright Dec 11 '13 at 0:07
$\begingroup$ @BrandonEnright Venus has carbon dioxide and Mars also has methane. Those do not necessarily indicate habitability of the Earth kind. Another indicator is large amount of oxygen. $\endgroup$ – user23660 Dec 11 '13 at 2:42
$\begingroup$ Going with spectra in general is a bad idea because you have to be able to pick the planet's spectrum out of the host star's spectrum. Our current level of technology cannot do this. $\endgroup$ – Kyle Kanos Dec 11 '13 at 3:41
$\begingroup$ Note that the answer to this question would depend heavily on whether the cloned solar system was face-on or edge-on to us, and whether cloned-Earth transits were visible. An edge-on system permits a mass measurement on the planet, instead of just a lower bound. A transiting system gives the diameter and even spectral characteristics. A face-on system is much harder to observe. $\endgroup$ – Emilio Pisanty Dec 11 '13 at 14:45
If we are assuming that we are restricted to observing them via light only, then we can use the angular resolution relation, $$ \theta\approx1.22\frac{\lambda}{D} $$ where $\theta$ is the angular resolution, $\lambda$ the wavelength observed, and $D$ the diameter of the aperture. Note that this only applies to optical and radio telescopes.
In order to observe a planet with such a telescope, we'd want something like nanoradian resolution (roughly 0.2 milli-arcseconds) at a wavelength of $\lambda\sim500\,{\rm nm}$. This would give us an aperture diameter of about 600 meters, which simply doesn't exist.
We possibly could use something like the Very Long Baseline Array, but despite the separation of many km, even that has an angular resolution in the milli-arcsecond range in the needed wavelength range. Perhaps locating one on the moon and/or on Mars might give us the needed distance for the desired resolution?
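For concreteness, the diffraction-limit numbers above can be checked directly; note the unit distinction between a nanoradian and a nano-arcsecond:

```python
import numpy as np

# Quick check of D = 1.22 * lambda / theta.  The ~600 m figure
# corresponds to a nanoRADIAN; a true nano-arcsecond is ~4.85e-15 rad
# and would demand an absurdly large aperture.

lam = 500e-9                      # observing wavelength (m)
for theta, label in [(1e-9, "1 nanoradian (~0.2 mas)"),
                     (1e-9 * np.pi / 180 / 3600, "1 nano-arcsecond")]:
    D = 1.22 * lam / theta        # required aperture diameter (m)
    print(f"{label}: D = {D:.3g} m")
```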
Kyle Kanos
$\begingroup$ You don't necessarily need to spatially resolve it though. If the planet passes behind the star you can look at the (tiny) changes to the spectrum when this happens. (I believe the low-resolution spectra we already have are produced this way.) There are also changes in the spectra due to the planet's rotation. Resolving these is a huge technical challenge but people are working on it. Polarimetry is also a possibility, since light reflected from a planet is polarised differently from direct light from the star. $\endgroup$ – Nathaniel Dec 11 '13 at 6:09
I had put off answering this question because it seems too broad without specifying the proposed detection methods, and excluding radio signals seems crazy (so I have considered them). If we were to take the solar system and put it at the distance of the nearest other star, at present, it is unlikely we would we able to detect signs of life on planet Earth.
No planets like the Earth have yet been detected around another star. That is to say, none that have a similar mass, radius and orbit at 1 au (or close to it) from a solar-type star. With current technology, it is just out of reach. Therefore any directed search for life on Earth wouldn't actually know where to start. If you can't detect the planet at all then there is absolutely no chance of looking at its atmospheric composition to look for biomarkers (e.g. oxygen along with a reducing gas like methane, or chlorofluorocarbons from an industrial civilisation - Lin et al. 2014). The only exoplanets for which atmospheric compositions have been (crudely and tentatively) measured are "hot Jupiters" - giant exoplanets orbiting very close to their parent stars.
A "blind" search could look for radio signatures and of course this is what SETI has been doing. If we are talking about detecting "Earth", then we must assume that we are not talking about deliberate beamed attempts at communication, and so must rely on detecting random radio "chatter" and accidental signals generated by our civilisation. The SETI Phoenix project was the most advanced search for radio signals from other intelligent life. Quoting from Cullers et al. (2000): "Typical signals, as opposed to out strongest signals fall below the detection threshold of most surveys, even if the signal were to originate from the nearest star". Quoting from Tarter (2001): "At current levels of sensitivity, targeted microwave searches could detect the equivalent power of strong TV transmitters at a distance of 1 light year (within which there are no other stars)...". The equivocation in these statements is due to the fact that we do emit stronger beamed signals in certain well-defined directions, for example to conduct metrology in the solar system using radar. Such signals have been calculated to be observable over a thousand light years or more. But these signals are brief, beamed into an extremely narrow angle and unlikely to be repeated. You would have to be very lucky to be observing in the right direction at the right time if you were performing targeted searches.
Hence my assertion that with current methods and telescopes there is not much chance of success. But of course technology advances and in the next 10-20 years there may be better opportunities.
The first step in a directed search would be to find planets like Earth. The first major opportunity will be with the TESS spacecraft, launching in 2017, capable of detecting earth-sized planets around the brightest 500,000 stars. However, its 2-year mission would limit the ability to detect an Earth-analogue. The best bet for finding other Earths will come later (2024 perhaps) with the launch of Plato, a six-year mission that again studies the brightest stars. However, there is then a big leap forward required to perform studies of the atmospheres of these planets. Direct imaging and spectroscopy would probably require space-borne nulling interferometers; indirect observations of phase-effects and transmission spectroscopy through an exoplanet atmosphere do not require great angular resolution, just massive precision and collecting area. Spectroscopy of something the size of Earth around a normal star will probably require a bigger successor to the James Webb Space Telescope (JWST - launch 2018), or even more collecting area than will be provided by the E-ELT in the next decade. For example Snellen (2013) argues it would take 80-400 transits-worth of exposure time (i.e. 80-400 years!) to detect the biomarker signal of an Earth-analogue with the E-ELT!
It has been suggested that new radio telescope projects and technology like the Square Kilometre Array may be capable of serendipitously detecting radio "chatter" out to distances of 50 pc ($\sim 150$ light years) - see Loeb & Zaldarriaga (2007). This array, due to begin full operation some time after 2025 could also monitor a multitude of directions at once for beamed signals. A good overview of what might be possible in the near future is given by Tarter et al. (2009).
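As a back-of-the-envelope check of the detection horizons quoted above, an inverse-square estimate with assumed numbers (illustrative transmitter power and receiver sensitivity, not SETI Phoenix or SKA specifications):

```python
import math

# Range at which an isotropic transmitter of effective power EIRP
# produces the minimum detectable flux S_min: S = EIRP / (4 pi d^2).
# Both numbers below are illustrative assumptions.

EIRP  = 5.0e6     # TV-like transmitter, effective isotropic power (W)
S_min = 1.0e-26   # assumed minimum detectable flux (W/m^2)

d = math.sqrt(EIRP / (4.0 * math.pi * S_min))   # detection range (m)
print(d / 9.461e15, "light years")              # ~1 light year
```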
Rob Jeffries
November 2013, 33(11&12): 5429-5440. doi: 10.3934/dcds.2013.33.5429
Integration with vector valued measures
M. M. Rao 1,
University of California, Riverside, Riverside, CA 92521, USA
Received August 2011 Published May 2013
Of the many variations of vector measures, the Fréchet variation is finite valued but only subadditive. Finding a `controlling' finite measure for these in several cases, it is possible to develop a useful integration of the Bartle-Dunford-Schwartz type for many linear metric spaces. These include the generalized Orlicz spaces, $L^{\varphi}(\mu)$, where $\varphi$ is a concave $\varphi$-function with applications to stochastic measures $Z(\cdot)$ into various Fréchet spaces useful in prediction theory. In particular, certain $p$-stable random measures and a (sub) class of these leading to positive infinitely divisible ones are detailed.
Keywords: Vector measures in Fréchet spaces, independently valued measures, prediction theory, stochastic measures in generalized Orlicz spaces, controlling measures.
Mathematics Subject Classification: Primary: 46G10, 60H05; Secondary: 28B05, 28C2.
Citation: M. M. Rao. Integration with vector valued measures. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 5429-5440. doi: 10.3934/dcds.2013.33.5429
S. Bochner, "Harmonic Analysis and the Theory of Probability," University of California Press, (1956). Google Scholar
N. Dunford and J. T. Schwartz, "Linear Operators, Part I: General Theory," Wiley-Interscience, (1958). Google Scholar
P. L. Duren, "Theory of $H^p$ Spaces," Academic Press, (1970). Google Scholar
W. Feller, "An Introduction to Probability Theory and its Applications, Vol. 2," Wiley, (1966). Google Scholar
D. J. H. Garling, Non-negative random measures and order preserving embeddings, J. London Math. Soc. (2), 11 (1975), 35. doi: 10.1112/jlms/s2-11.1.35. Google Scholar
S. Kakutani, Über die Metrisation der topologischen Gruppen, Proc. Imp. Acad. Tokyo, 12 (1936), 82. doi: 10.3792/pia/1195580206. Google Scholar
N. J. Kalton, N. T. Peck and J. W. Roberts, $L^0$-valued vector measures are bounded, Proc. Amer. Math. Soc., 85 (1982), 575. doi: 10.2307/2044069. Google Scholar
V. L. Klee, Invariant metrics in groups (solution of a problem of Banach), Proc. Amer. Math. Soc., 3 (1952), 484. doi: 10.1090/S0002-9939-1952-0047250-4. Google Scholar
T. V. Panchapagesan, "The Bartle-Dunford-Schwartz Integral," Birkhäuser Verlag AG, (2008). Google Scholar
A. Prékopa, On stochastic set functions, I-III, Acta Math. Acad. Sci. Hungary, 8 (1956), 215. doi: 10.1007/BF02020323. Google Scholar
M. M. Rao, Random measures and applications, Stochastic Anal. Appl., 27 (2009), 1014. doi: 10.1080/07362990903136546. Google Scholar
M. M. Rao, "Random and Vector Measures," World Scientific, (2012). Google Scholar
M. M. Rao, "Measure Theory and Integration," Wiley-Interscience, (1987). Google Scholar
M. M. Rao and Z. D. Ren, "Theory of Orlicz Spaces," Marcel Dekker, (1991). Google Scholar
M. M. Rao and Z. D. Ren, "Applications of Orlicz Spaces," Marcel Dekker, (2002). doi: 10.1201/9780203910863. Google Scholar
S. Rolewicz, "Metric Linear Spaces," Warsaw, (1972). Google Scholar
I. Shragin, "Superpositional Measurability and Superposition Operator (Selected Themes)," Odessa, (2007). Google Scholar
M. S. Steigerwalt and A. J. White, Some function spaces related to $L_p$, Proc. London Math. Soc., 22 (1971), 137. doi: 10.1112/plms/s3-22.1.137. Google Scholar
M. Talagrand, Les mesures vectorielles à valeurs dans $L^0$ sont bornées, Ann. Sci. École Norm. Sup., 14 (1981), 445. Google Scholar
K. Urbanik, Some prediction problems for strictly stationary processes, Proc. 5th Berkeley Symp. Math. Statist. and Prob., 2, part 1 (1967), 235. Google Scholar
V. M. Zolotarev, "One Dimensional Stable Distributions," Translations A.M.S., 65 (1986). Google Scholar
Calculus Lab Manuals
Created by Alex Jordan, last modified by Kandace Kling on Dec 04, 2019
Calculus I and II are taught at Portland Community College using a lecture/lab format. The laboratory time is set aside for students to investigate the topics and practice the skills that are covered during their lecture periods. These lab manuals serve as guides for the laboratory component of these courses.
HTML and PDF
Each manual has been released with several synced versions that offer different features. The essential content of each version is the same as for all others.
Whenever there is an internet connection, and you do not prefer to have a print copy, this version is most recommended. It offers interactive elements and easier navigation than a PDF could offer.
http://spot.pcc.edu/math/clm
The web version offers full walk-through solutions to supplemental problems.
http://faculty.gvsu.edu/boelkinm/Home/AC/index.html
This is the full eBook for Active Calculus. Our lab uses select sections of this book. Additionally our lab has a supplement which is only available in the PDF.
For-printing PDF
http://spot.pcc.edu/math/clm/clm-print.pdf
To save on printing expense, this version is mostly black-and-white, and only offers short answers to the supplemental exercises (as opposed to full solutions).
Use the color PDF and print in grayscale.
Color PDF
http://spot.pcc.edu/math/clm/clm-print-color.pdf
http://spot.pcc.edu/math/ActiveCalculus/ActiveCalculusWithPCCSupplement.pdf
Knowls
The HTML version makes extensive use of "knowls". A knowl is similar to a link, except that instead of transporting you to a different location or a different page, the requisite information is brought to you as hidden content that is revealed. As you explore the HTML version, try clicking on knowl links that you see.
Copying Graphs, Tables, and Math Content
In the MTH 251 manual, the graphs and other images that appear may be copied in various file formats using the HTML version. Below each image are links to .png, .eps, .svg, .pdf, and .tex files that contain the image. The .eps, .svg, and .pdf files will not lose sharpness no matter how much you zoom, but typically are large files. Some of these formats may not be recognized by applications that you use. The .png file are of fairly high resolution, but will eventually lose sharpness if you zoom in too much. The .tex files contain code that can be inserted into other .tex documents to re-create the images.
The MTH 252 manual images are currently only available as svg images from the HTML version. For now if you need one of these images, it's easiest just to take a screen shot.
In both manuals, mathematical content can be copied from the HTML version. To copy math content into MS Word, right-click or control-click over the math content, and click to Show Math As MathML Code. Copy the resulting code, and Paste Special into Word. In the Paste Special menu, paste it as Unformatted Text. To copy math content into LaTeX source, right-click or control-click over the math content, and click to Show Math As TeX Commands.
Tables can be copied from the HTML version and pasted into applications like MS Word. However,
Their decorations like horizontal and vertical lines might not carry over. These can be added back.
Mathematical content within tables will not always paste correctly without a little extra effort as described below.
The HTML version is intended to meet or exceed all web accessibility standards. If you encounter an accessibility issue, please report it to the editor.
All graphs and images should have meaningful alt text that communicates what a sighted person would see, without necessarily giving away anything that is intended to be deduced from the image.
All math content is rendered using MathJax. MathJax has a contextual menu that can be accessed in several ways, depending on what operating system and browser you are using. The most common way is to right-click or control-click on some piece of math content.
In the MathJax contextual menu, you may set options for triggering a zoom effect on math content, and also by what factor the zoom will be.
If you change the MathJax renderer to MathML, then a screen reader will generally have success verbalizing the math content.
Tablets and Smartphones
PreTeXt documents like this lab manual are "mobile-friendly". The display adapts to whatever screen size or window size you are using. A math teacher will always recommend that you do not study from the small screen on a phone, but if it's necessary, this manual gives you that option.
Ordering Print Copies
For MTH 251, the print center order number for the PDF is #23660.
(Older editions were #23520, #23337.)
Major Changes in Content
With the MTH 251 Manual, some changes to the content have been introduced with the release of the PreTeXt version. The most notable changes are:
The numbering scheme does not match the earlier numbering scheme. This was a necessary consequence of converting to PreTeXt.
The related rates lab has been rewritten from scratch using the DREDS approach. Both quantity and rate variables are explicitly defined at the beginning of each problem.
In the implicit differentiation lab, a section on logarithmic differentiation has been added.
The printed version only contains short answers to the supplemental questions rather than complete walk-through solutions. However, complete solutions may still be found in the HTML version and the screen PDF version.
In the continuity section of the lab manual, some of the problems related to continuity are preceded by tables that help students organize their thoughts around appropriate function values and limit values.
There is an appendix on units of measure that spells out the nature and abbreviation for all of the quantities' units used in the text. The rate units are mentioned in the appendix as that is part of the content of the course.
The very first activity looks very different in the first two steps. The questions require unit analysis which cannot be performed without units on the constants, so the units are there. After the first two problems there is a transition explaining why units are going away (other than for conclusion purposes) and away they go.
A few problems here and there have either changed, disappeared, or been introduced.
Bug/Typo/Suggestion Reports
If you are PCC faculty and you find something about the lab manual that you think should be changed, whether it be a typo, a mathematical error, rewording of a sentence, restructuring of an image, or whatever... you can list it here. I (Alex) will come through here periodically and work through these. Log yourself into Spaces using the Login button at the top right if you are not already logged in. (After logging in, you might be taken to the Spaces home page. To get back here, the simplest thing to do is to just follow whatever link or navigation took you to this page in the first place.) Then find the button near the top of the page to Edit this page. If you are part time, then you may need to send me an email to add you to the list of people who can edit Math SAC pages in Spaces first. Click inside the table, and use the menu at the top to add a row.
3/30/16. Fixed in source.
Where.
Specifying a section name (like "Velocity" for Lab 1 Activity 1) is much more helpful than a section number or a page number. You can refer to other things like exercises by number.
What to change.
Please be specific if you can. Tell me to replace "this" with "that". Or if it is a problem with the behavior of the web version, be very clear about what the issue is, and also tell me what browser you are using, what version number, and what operating system.
Leave blank and Alex will update with notes about applying these changes.
In Graphical Derivatives, exercise #16 A graph has horizontal asymptotes pointing in the wrong direction. 9/8/2015. Fixed in source code. Will make it to print for AY 2016/7.
In Graphical Derivatives, exercise #16 Please include the entire line as the asymptote.
12/7/15. I have plans to create a line style that starts off sparse on one end, and gets closer to solid on the other, where the arrow would point. I think this would be best. I just need some free time to get it there.
11/3/2016. Extending to full-width lines; leaving the arrows one-directional.
In Graphical Derivatives, exercise #11-15 The asymptotes are missing on Figure 4.2.8. This one is tough since the axes are the asymptotes, but could we at the very least get the labels for the asymptotes on this figure? 10/5/2015. Added asymptotes and labels.
In web version, in Velocity, definition of Average Velocity If you make the browser window just narrow enough so as to still not induce any rearranging of the paragraph text, then activate the zoom on the math content in the definition, then the zoom window is much too small. (Actually this happens for all math content.) 9/8/2015. Researching with MathJax user group.
In Limits and Continuity 2.2.1 Exercises, #6 The outer parentheses were omitted from (xcos(x)). Since this fundamentally changes the problem, please wait until summer to change all of the versions. 6/7/2016. Fixed in source.
Is it possible to make a universal change to put the table numbers and captions at the tops of the tables instead of the bottoms? I've always been told we put figure numbers at the bottom and table numbers at the top. My eyes always immediately look for table numbers at the top.
10/5/2015. This is already the case for the online version. For print, there is a hitch. It could be done for standalone tables, but there are many tables that are lined up alongside graphs or blocks of text. That structure currently requires captions at the bottom and then lines them all up. It would take serious structural changes to move captions for such tables. So for now the question is, would it be better to make the change anyway for the standalone tables? Then there would be inconsistency with the other kind of table.
11/3/2016. Leaving captions below for tables and figures in 2nd edition. Still agree that ideally tables would have captions at top: http://tex.stackexchange.com/questions/3243/why-should-a-table-caption-be-placed-above-the-table
In Non-existent limits, Figure 2.7.1, 2.7.2, 2.7.3 The asymptotes are missing labels 10/5/2015. I don't see this. In a physical print edition, the online print edition, and the web version (I used Firefox), I see the vertical asymptotes all there and labeled. Is this about horizontal asymptotes? They are left out intentionally since it seemed to me they are a distraction from the topic at hand. Should they be added?
6/7/2016. Marking this as resolved.
Example 3.3.2 In the Web version, the graph and the math overlap. 10/5/2015. Fixed, I think.
Example 3.3.2 Fixing the above issue for the web version will make the math extend into the margin in the print version. Will need to address this for next printing. 11/3/2016. Looks good now in print.
Figure 4.1.2 It would be helpful if the linear piece to the far right on the graph extended all the way to (if not a little past) 7 since the students need the slope to fully answer number 1 and number 2. 10/15/15. Changed in source to go all the way to 7. Can't upload now, but soon.
Throughout the Manual but often in Lab 4 The questions/text often say things like "f has been drawn in Figure x.x.x" or "Draw g' in Figure x.x.x" but then the caption for the figure reads "y=f(x)" or "y=g'(x)". I'm a fan of referring to functions by their names but I don't understand the logic of using function names in text referring to graphs but then using equations in the captions for those graphs. This seems to be a change from the old version of the Manual, where, often, the caption to a graph was simply "f" (not "y=f(x)") and then the caption for the graph jibed with the text that referred to the graph. I'm not sure about what the most logical choice is but it seems to me that it might not be precise directions to tell a student to "draw f on Figure x.x.x". I mean, maybe they'd be correct to literally draw a big "f" in the middle of the graph! But if the directions were, "draw a graph of y=f(x)", it would be unambiguously clear.
12/7/15. I believe that now, no graph has something like "f" as its caption. Most have something like "y=f(x)". Some have no caption at all or something else special.
As for wording the "Draw" instructions, if you want to settle on a style choice, then identify all locations of the issue, then I can change them. But I'll need your help in identifying those locations.
For the record though, my opinion is that the function's name is its name. And when making a graph with x- and y-axes, you intrinsically then have to begin referring to "x" and "y" as well, where formerly in the surrounding text that was not necessary. So I don't see the problem. (Nor do I see a problem with rewording things.)
Antiderivatives:4.5.1
Before Exercise #9, next to figure 4.5.11, it reads, "Answer the following question..." but these directions refer to #9-16, so it should be "questions"
Also, I wonder if all of the plural "values" and "intervals" in #9-16 should be changed to "value(s)" and "interval(s)" to allow for just one such answer.
12/7/15. Changed to plural "questions", and added parentheses as noted.
"Continuity on an Interval", 2.11
I'm concerned about how the problems in "Continuity on an Interval" are laid out, especially the last problem (#11) in the problem-set. The "list" of criteria are organized like a paragraph of sentences, rather than a list, making it hard to see all of the criteria. (Students struggle enough with these problems; we need to make the list of items very clear, not buried in a paragraph.) In the other problems of this type (#9 and #10), the criteria is easier to read but I'd still rather see one single list, rather than two columns of lists since the columns are rather close together and students might think items listed next to each other are somehow related to each other.
I'm also kinda shocked at how huge the scale and labels on the graphs are: is this on purpose?! It might be good to have large grids but not necessarily ginormous scale/labels...
12/7/15. Regarding the graphs, it wasn't the scale and labels that were large; they were normal size but the web browser was magnifying the graph to fit the width. That was easy: just wrap the image in a figure. There are now additional Figures 2.11.2, 2.11.3, and 2.11.4. Since no block level enumerated object came after this in this section, this has not affected numbering.
The main practical issue is with the layout is that if it is all spelled out in one vertical column of items, then this is awkward at best for fitting with the constraints on the print edition.
I've changed them to be that way, and the tall single column of conditions luckily is not causing awkward page breaking in the print edition.
However there has been a side effect in the print edition that you may not like. With side by side objects, any captions have to go down below the side by side objects. Here, the column of conditions is so tall, that the right panel with the graph has lots of white space. Since the caption for the Figure is down low, it seems better to push the graph down low as well. And so you are left with awkward white space above the Figure.
Supplement, 2.14 Due to the previous comment, I was curious how other "draw a graph satisfying this criteria" problems looked so checked out the supplement to Lab 2 and #6 has an issue with the first column of criteria overlapping with the second column, and #7 has bullets - apparently for each item in the list - but the bullets (at least the first one) don't seem to line-up with anything, I think making it difficult to clearly see what the items are.
12/7/15. The bullets in #7 do line up, but you have to read the items carefully. There are (were) only 4 items with some groupings together within an item.
I'll change these (#6 and #7) to be as above; they will probably have the same awkward white space issue in the print edition though.
Constant Factor Rule, 5.3.1 This issue only occurs in the "screen-reader version" (frankly, I don't know how I ever even noticed this...). In the exercises, #4 appears in the margin (?)
12/7/15. I don't see this in the Power Rule (5.3) or the Constant Factor rule (5.4). I'm using Firefox, looking at http://spot.pcc.edu/math/clm/accessible/section-power-rule.html. If you still see this, can you say what browser, version, and OS you are using? Also, can you confirm what section you saw this in?
6/7/2016. Figured out this is about the "screen pdf". Has something to do with how multicolumn exercisegroups are handled. That will be refactored this summer. Note: look at this again later.
11/3/2016. We're not going to continue with a screen pdf. To read the document electronically, the HTML will be required. This means an active internet connection is needed to access the CSS and JS libraries. However the issues that come with a screen pdf outweigh this consideration.
Supplement, 9.8.1 The graphs given in Figures 9.8.2 and 9.8.1 don't have any scale and (unless you already know about this stuff) there's no way to know what t is or to understand why there are short line segments between (0,0) and the points. I don't know what to suggest but I think these figures are confusing... 12/7/15. Would labeling it at 2 address the issue?
Labeling at 1 will make these too cluttered given the pure pictorial point they are trying to make. There is no need to explain what t is here beyond what the surrounding text says. Steve's point is only to offer an explanation as to why the hyperbolic trig functions are called "hyperbolic".
Tangent Lines, Figure 3.2.3 The "legend" for the graph is in an opaque box that covers the "y" label on the y-axis, thus leaving the axis unlabeled. Could the box be moved or something so that the y-axis is labeled? 12/7/15. I slid the window right two units.
The Derivative, Example 3.3.2 In the third step in the work to find f'(x), there's a missing set of parentheses around the entire object of the limit. 10/15/15. Changed in source. Not able to upload right now, but soon.
In order to get graphs from the Lab into my weekly graded labs, I copied the PNG available in the web version into my worksheet but when the prints came back from the Print Center the graphs were faint and not particularly reader-friendly. My first thought was that it the red color of the curves was the problem but now I don't think that's it. Instead, I think it's the nature of the grid lines and the fact that I like to print my graded labs on ivory colored paper. Maybe there's nothing that can be done in the Lab but I thought I should ask about making the grid-lines a bit darker, at least in one of the many versions of each graph available in the lab.
12/7/15. Some day we can sit down together, make several graph grid options in a single tex file, print on the ivory paper, and see what works best without having an unforeseen negative impact on the other output modes. Just let me know someday when you have time.
Figure 4.3.4 The graph of a piece of g' is incorrect. I believe the concavity is incorrect and I know the point where the hole is is incorrect. If I drew a line with slope 3 through the point on g where x=-5, I would get a secant line to g. The hole should be higher, perhaps at the point (-5,4).
10/15/15. It turns out the open point on the derivative graph is at (-5, pi), not (-5, 3). We could have a totally different graph here, but it seemed like the least damage would be caused if I just labeled the coordinates of that point, which is now the case for the online version. The equation for the original curve is y=5|sin(pi/5*x)| - 2. Experiments showed that a potential future replacement could by y=-5(|sin(pi/5*x)| - 1)^2 + 3.
11/3/2016. Leaving this as indicated above with the (-5,pi) labeled ont he second graph.
Suggested schedule Would it be possible to change the suggested schedule for week 4 and week 5? I would like to move 4.4 Higher Order Derivatives from week 4 to week 5. 12/7/15. Moved.
Solutions to supplement 6.5.1 #17 The second and third line of the solutions are messed up with e^d/dsin(x) instead of simply e^sin(x). 12/7/15. Fixed.
Example 5.9.3 Final expression should be written over a common denominator. Also, maybe change the 3t^3 term in the original function to 3t^4 to avoid the domain issue you mentioned.
12/7/15. Common denominator. Don't want to change the problem right now mid-year. Put a note about the domain, but we can revisit in summer.
11/3/2016. Common denominator there. Leaving the note about the domain. (Added a similar note to the previous example.)
(Limits 2.1.1.Exercises) Caption should be y = g(t) ...not g(x) 12/7/15. Fixed.
2.2.1 directions for 4-6 (2.2 Limit Laws) ...can be replaced with its value base up on one of the replacement ... "up on" should be "upon" 12/7/15. Fixed.
after table 8.1.3 (end of introduction to related rates) Please remove the parentheses around the word about in the conclusion. 12/7/15. Fixed.
DREDS acronym definition The word "for" should be replaced with the word "four." 12/7/15. Fixed.
In example 8.2.1 The second bullet in describing the rates phrase starts with the word "drone." The word "drone" should be replaced with the word "determine." 12/7/15. Fixed.
9.3.1 Exercises, 1-6 directions The second occurrence of the word "following" should be followed by "the procedure outlined in Algorithm 9.3.1" 12/7/15. Fixed.
9.4.1 Exercises: problem 8
After the existing directions the following should be added:
Then state the local minimum and maximum points on k. Specifically address both minimum and maximum points even if one and/or the other does not exist.
12/7/15. Fixed.
Figure 9.2.2 The vertical line should not be there. 12/7/15. Fixed.
4.2.1 Exercises Can we add "onto Figure 4.2.1" to the directions in #1. I have a student confused about the two figures and that would help. It could say, "For each given value on x draw onto Figure 4.2.1 a nice long line segment at the corresponding point on g . . .
4.7.48 There is a contradiction in the instructions. First it claims that the only discontinuity is at a vertical asymptote at x=2. The last bullet however claims that the limit as x-> 4 from the left = -1 while the limit as x -> 4 from the right = 1, neither of which mesh with the second bullet point which says f(4)=3. Fix: Change the first bullet to say there are two discontinuities, one at a vert. asy. at x=2 and another a x=4.
Note from Steve. Looked up the original problem. Those limits are supposed to be about f', not f. No other changes necessary. 3/30/16. Fixed in source.
Limit Laws Exercises introduction Paragraph before problems 4-6, "Use the limit laws to establish the value... shown in exampleExample 2.2.1...." Fix the unneeded "example". 3/30/16. Fixed in source.
Limits and Continuity
I've found that there are far too many exercises in this week's lab. Other weeks we can get almost everything done, but week 2 is just jam packed even when I slim the list down substantially. I'm not suggesting a correction, but more asking if others have found this to be also especially true of this lab chapter. Perhaps additional questions could simply be moved to the supplement.
yes yes yes yes!! please! YES! That lab is WAY too long!! We could make a group project out of this but I vote to empower (and direct!) the author and his typesetter to select ~1/3 of the problems and move them to the supplement (or just delete them if there isn't interest in creating their solutions).
6/7/2016. Notified Steve to think about this.
11/3/2016. No change for edition 2.
Exercises on Antiderivatives (4.5.1)
The wording on the problem with Jasmine and her lab assistant is difficult for students. Introducing that sine is a periodic function implies to almost all of them that the graph of sine will explain the answer to the problem. Also, the part about Jasmine being half right is too vague for students. Perhaps instead, something like...
"Recall that a periodic function is one whose y-values repeat regularly.
a.) Are derivatives of any periodic function always periodic?
b.) Are antiderivatives of any periodic function always periodic?
c.) Only one of the answers to a or b should be "no". Draw an example of a function that would demonstrate how this "no" would occur."
I'm sure you can make it more clear, but I think less is more for this question.
6/11/2016. Added some clarifying wording provided by Steve.
Generally about the graphs I think the scales on the graphs are often difficult to "see". In previous discussions about the formatting of the Lab, I've commented about how I think the scale on the axes should be in a smaller font since I think the numbers, especially the negative numbers, span so much space that it's not obvious at a casual glance which spot on the axes the numbers refer to. Now I've discovered a possible explanation about why I thought the scale on the graphs was problematic: there aren't "tics" on the axes to represent the locations that the numbers represent. Scale values like "-2" span almost two full "squares" in the grid so it's (a little bit) ambiguous exactly what spot on the axis the scale-value is representing. I think there should be "tics" on the axes so that the numbers on the scale are describing the tics.
Intro to the Chain Rule section In the sin(x^2) example, the variable is labeled wrong in the last sentence, "The factor of d/dz(x^2) is called a chain rule factor." It should say d/dx(x^2). 6/7/2016. Fixed in source.
Exercises 9.4.1 #7 The derivative should be stated "k' ( x )" 6/7/2016. Fixed in source.
2.4 Limits at Infinity In the second paragraph, the second limit should say x \to -\infty 11/3/2016. Fixed.
3.4 Derivative Units "Akbar was given..." I would suggest changing it to $V'(2)$ instead of $V'(20)$, that way students are less likely to be able to argue that the tub should be empty after 20 minutes. The purpose is that they should be recognizing that the sign on the derivative is wrong. 11/4/2016. Changed it to 2.
8.1 Related Rates "When the radius of the snowball is 1.4 cm and the snowball is melting at a rate of...." It should be 0.3cm^3/min, not 0.3cm^3/L
Note to Alex: This got fixed in the pdf version very early on and it is correct in what was sent to the print center (phew!). I did note that it is still errant in the interactive version as well as the pdf "viewable" version.
12/5/2016. Fixed in the 201701 print edition, and will be fixed in the corresponding HTML. (There will no longer be a screen-formatted pdf; screen users should use the HTML version.)
9.5.1 Exercises, #2-6 intro
(Inflection points exercises)
Sorry if this is already here, but in the printed version I have today, I just noticed that after giving the y(x), y'(x), and y''(x) functions, we're giving g(t) and g'(t). As far as I can tell, g and g' are not relevant here, but were for 9.4.1 Exercises #2. I think this is a cut-and-paste error, but could be missing something.
g(t) is the function from 9.4.1 Exercise 2-6 (Sign Tables for the First Derivative). It is definitely an error.
12/5/2016. Fixed in source, and will be gone from the new HTML to be posted after 201604 ends. Unfortunately this made it into the print copies for 201701 and forward. Also there was a formatting change which makes the extra sentence more prominent.
In Section 5 Supplement, Table 5.12.5 Table lists g(x) then f' then f'' ...I think it should be g' and g'' 6/15/2017. Fixed.
Page 83 (which has no page number in the current print version)
6/15/2017. The page number is there. It's just that the section title is so long that it runs into the page number.
9.4.1 #3 k should be g 6/15/2017. Fixed.
Supplemental Homework Questions for Section 5, Derivative formulas in current 5.12.1 Supplemental exercises for Derivative formulas #38 in the questions says s(x)=xg(x)g(x), find s'(2). However, in the solutions it finds s'(2) for s(x)=xf(x)g(x). They should probably match. 6/15/2017. Fixed.
Supplement for implicit Differentiation, 7.4 Problem #2, the fractions in the sqrt aren't rendering properly... need more brackets. 6/15/2017. The CDN for MathJax had to be discontinued in April 2017, which led to some changes we couldn't deal with mid year. Formerly, \sfrac was supported (slanted fraction) but the MathJax we are using right now doesn't. However at the next build of the HTML, we will get \sfrac back.
4.2.1: Graphical Derivatives Minor confusion for #11-15 since they refer to functions f and f' but there are three different graphs on the same page, and all are called f or f'. #11-14 don't include a reference to the figure numbers: that might help clarify. 6/15/2017. The first two plots are part of question 10. The third plot and the empty grid are for 11–15. There is a sentence introducing 11–15 that separates the plots. But to avoid confusion I will change "f" to "h" in 11–15.
4.5.1: Antiderivatives
Minor grammatical error in the "preamble" to #1 & #2 (text next to Figure 4.5.4): either "have" should be "has" since the subject is "each" (not "functions"), or the subject should be changed to a plural noun, e.g., "All of the linear functions in Figure 4.5.4 have..." 6/15/2017. Changed the "have" to "has".
7.1 Implicit Differentiation Right before Example 7.1.2, the sentence ending in "...shown in Example 7.1.2" needs a period. 6/15/2017. Fixed.
7.1.1 General Implicit Differentiation, #7
In the text next to Figure 7.1.3, there's a missing parenthesis at the end, after "...Exercise 5." 6/15/2017. Fixed.
9.5.1 exercises, #3 I think the question should ask about the critical numbers of y', not y. | CommonCrawl |
\begin{document}
\title{\Large\bf{Euler's Equation via Lagrangian Dynamics\\ with Generalized Coordinates }}
\author{Dennis S. Bernstein, University of Michigan, Ann Arbor, MI\footnote{Professor, Aerospace Engineering Department, Corresponding Author, \texttt{[email protected]}},\\
Ankit Goel, University of Maryland, Baltimore County, Baltimore, MD\footnote{Assistant Professor, Mechanical Engineering Department, \texttt{[email protected]}}, \\
Omran Kouba, Higher Institute for Applied Sciences and Technology, Damascus, Syria\footnote{Professor, Department of Mathematics, \texttt{omran\_kouba@hiast.edu.sy}}} \maketitle
\begin{abstract}
Euler's equation relates the change in angular momentum of a rigid body to the applied torque. This paper fills a gap in the literature by using Lagrangian dynamics to derive Euler's equation in terms of generalized coordinates. This is done by parameterizing the angular velocity vector in terms of 3-2-1 and 3-1-3 Euler angles as well as Euler parameters, that is, unit quaternions.
\end{abstract}
\section{\large Introduction}
The rotational dynamics of a rigid spacecraft are modeled by Euler's equation \cite[p. 59]{hughes}, which relates the rate of change of the spacecraft angular momentum to the net torque.
Let $\omega\in\BBR^3$ denote the angular velocity of the spacecraft relative to an inertial frame, let $J\in\BBR^{3\times3}$ denote the inertia matrix of the spacecraft relative to its center of mass, and let $\tau$ denote the net torque applied to the spacecraft. All of these quantities are expressed in the body frame.
Applying Newton-Euler dynamics yields Euler's equation
\begin{equation}
J\dot\omega + \omega\times J\omega = \tau. \label{ee} \end{equation}
An alternative approach to obtaining the dynamics of a mechanical system is to apply Hamilton's principle in the form of Lagrangian dynamics given by
\begin{align}
\rmd_t \p_{\dot q} T - \p_q T = Q, \label{ld} \end{align}
where $T$ is the kinetic energy of the system, $q$ is the vector of generalized coordinates, and $Q$ is the vector of generalized forces arising from all external and dissipative forces and torques, including those arising from potential energy.
Here, $\rmd_t$ denotes total time derivative, and $\p_{\dot q}$ and $\p_q$ denote the partial derivatives with respect to $\dot q$ and $q,$ respectively.
For a mechanical system consisting of multiple rigid bodies, \eqref{ld} avoids the need to determine conservative contact forces, which, in the absence of dissipative contact forces, removes the need for free-body analysis.
For the case of a single rigid body, however, \eqref{ld} offers no advantage relative to a Newtonian-based derivation of Euler's equation.
Nevertheless, as an alternative derivation of \eqref{ee}, it is of interest to apply \eqref{ld} to a single rigid body.
A Lagrangian-based derivation of Euler's equation is given in \cite[p 281]{taeyoung} using Lagrangian dynamics on Lie groups.
The goal of the present note is to provide an elementary derivation by using generalized coordinates.
In particular, Euler angles are considered.
Among all possible sequences consisting of three Euler-angle rotations, there are six that have three distinct axes and six that have the same first and last axes, for a total of 12 distinct sequences \cite[p. 764]{wertz}.
Relabeling axes allows us to consider two representative sequences, namely, 3-2-1 (azimuth-elevation-bank) and 3-1-3 (precession-nutation-spin).
These choices are commonly used for aircraft and spacecraft, respectively.
As a further illustration, Euler parameters (quaternions) are also considered.
The novelty of the present article is a compact derivation of Euler's equation using generalized coordinates.
Some elements of the derivation are known; for example, a connection is made between Proposition 1 and equation (A24) of \cite{meirovitch}.
However, a comparable derivation of Euler's equation using generalized coordinates does not appear to be available.
Notation: For $x,y\in\BBR^3,$ $x\times y$ denotes the cross product of $x$ and $y,$ $x^\times$ denotes the cross-product matrix (so that $x^\times y = x\times y$), $I_3$ denotes the $3\times 3$ identity matrix, and $A^\rmT\in\BBR^{l\times k}$ denotes the transpose of $A\in\BBR^{k\times l}.$
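For reference, the cross-product matrix of $x = [x_1\ \ x_2\ \ x_3]^\rmT$ has the explicit form
\begin{align*}
x^\times = \matl 0 & -x_3 & x_2\\ x_3 & 0 & -x_1\\ -x_2 & x_1 & 0\matr,
\end{align*}
so that $x^\times y = x\times y$ for all $y\in\BBR^3.$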
\section{\large Preliminary Results}
For a single rigid body, let $q = [q_1\ q_2\ q_3]^\rmT\in\BBR^3$ denote generalized coordinates, and assume that the angular velocity $\omega$ can be parameterized as
\begin{equation}
\omega(q,\dot q) = S(q)\dot q, \label{omegaS} \end{equation}
where $S(q)\in\BBR^{3\times 3}.$
Assuming that the net force is zero and thus the center of mass of the spacecraft has zero inertial acceleration, it follows that
\begin{align}
T(q,\dot q) &= \half \omega(q,\dot q)^\rmT J\omega(q,\dot q)\nn\\
&= \half \dot q^\rmT S(q)^\rmT J S(q) \dot q, \end{align}
and thus
\begin{gather}
\p_{\dot q}T(q,\dot q) = S(q)^\rmT J S(q)\dot q, \label{pdotT}\\
\rmd_t\p_{\dot q}T(q,\dot q) = S(q)^\rmT J S(q)\ddot q
+ S(q)^\rmT J \dot S(q)\dot q + \dot S(q)^\rmT J S(q)\dot q, \label{dtpdotT}\\
\p_{q}T(q,\dot q)
= \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr \label{pqdotT}.
\end{gather}
Furthermore, it follows from \cite[8.10.6]{baruh} that
\begin{equation}
Q = S(q)^\rmT \tau. \label{QStau} \end{equation}
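As a brief sketch of why \eqref{QStau} holds (see \cite[8.10.6]{baruh} for the full argument), note that \eqref{omegaS} implies that a virtual rotation satisfies $\delta\theta = S(q)\,\delta q,$ and thus the virtual work of the applied torque is
\begin{align*}
\delta W = \tau^\rmT \delta\theta = \tau^\rmT S(q)\,\delta q = [S(q)^\rmT\tau]^\rmT \delta q,
\end{align*}
whose coefficient vector with respect to $\delta q$ is the generalized force $Q = S(q)^\rmT\tau.$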
Now, combining \eqref{dtpdotT}, \eqref{pqdotT}, \eqref{QStau} with \eqref{ld} yields
\begin{align}
S(q)^\rmT J S(q)\ddot q
+ S(q)^\rmT J \dot S(q)\dot q + \dot S(q)^\rmT J S(q)\dot q
- \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr = S(q)^\rmT \tau. \label{ld2} \end{align}
If $S(q)$ is nonsingular, then
\begin{align}
J S(q)\ddot q
+ J \dot S(q)\dot q + S(q)^{-\rmT}\dot S(q)^\rmT J S(q)\dot q
- S(q)^{-\rmT}\matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr = \tau, \label{ld4} \end{align}
which can be viewed as Euler's equation expressed in terms of arbitrary generalized coordinates.
Next, noting that
\begin{equation}
\dot \omega(q,\dot q) = S(q)\ddot q + \dot S(q)\dot q, \end{equation}
\eqref{ld4} can be written as
\begin{align}
J \dot \omega(q,\dot q)
+ S(q)^{-\rmT}\left(\dot S(q)^\rmT J S(q)\dot q
- \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr\right) = \tau. \label{ld5} \end{align}
Comparing \eqref{ld5} with Euler's equation \eqref{ee} written in terms of the angular velocity implies
\begin{equation}
S(q)^{-\rmT}\left(\dot S(q)^\rmT J S(q)\dot q
- \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr\right)
= \omega(q,\dot q)\times J\omega(q,\dot q). \label{eqnwanted}
\end{equation}
Our objective is to verify this identity for rotations parameterized by Euler angles and Euler parameters.
For the following result, the columns of $S(q)$ are denoted by $S_1(q),$ $S_2(q),$ and $S_3(q)$ so that
\begin{equation}
S(q) = [S_1(q) \ \ S_2(q) \ \ S_3(q)]. \end{equation}
We note that {\bf(a)} is given by equation (A24) of \cite{meirovitch}.
{\bf Proposition 1.} Define $S$ by \eqref{omegaS}. Then, the following properties are equivalent:
{\bf(a)} For all $q$ and $\dot{q}$,
\begin{equation}
\dot S(q) + [S(q)\dot q]^\times S(q)
=
[ \p_{q_1}S(q)\dot q \ \ \
\p_{q_2}S(q)\dot q \ \ \
\p_{q_3}S(q)\dot q ]. \label{identJ}
\end{equation}
{\bf(b)} For all $q$ and $\dot{q}$,
\begin{align}
\sum_{i=1}^3 \dot q_i\p_{q_i} S(q)
&+ [ S_2(q)\times S_3(q) \ \ \ S_3(q)\times S_1(q) \ \ \ S_1(q)\times S_2(q) ] \dot q^\times\nn\\
&= [ \p_{q_1} S(q)\dot q \ \ \ \p_{q_2} S(q)\dot q \ \ \ \p_{q_3} S(q)\dot q].
\label{identnoJ} \end{align}
{\bf (c)} For all $q$,
\begin{align}
\p_{q_2}S_1(q)-\p_{q_1}S_2(q)&=S_1(q)\times S_2(q),\label{identnodotq1}\\
\p_{q_3}S_1(q)-\p_{q_1}S_3(q)&=S_1(q)\times S_3(q),\label{identnodotq2}\\
\p_{q_3}S_2(q)-\p_{q_2}S_3(q)&=S_2(q)\times S_3(q).\label{identnodotq3}
\end{align}
Now, assume that $S(q)$ is nonsingular. Then (a)--(c) are equivalent to \begin{equation}\label{identshort} S(q)^\rmT [\p_{q_3}S_2(q)-\p_{q_2}S_3(q)\ \ \
\p_{q_1}S_3(q)-\p_{q_3}S_1(q) \ \ \
\p_{q_2}S_1(q)-\p_{q_1}S_2(q)]=\det(S(q))I_3. \end{equation}
The following lemmas are needed.
{\bf Lemma 1.} Let $x\in\BBR^3$ and $A\in\BBR^{3 \times 3}.$
Then,
\begin{align}
A^\rmT (Ax)^\times A = (\det A)x^\times. \label{ATAxA} \end{align}
{\bf Proof.} See {\it xxxix}) of Fact 4.12.1 in \cite[p. 385]{bernstein2018}.
$\square$
{\bf Lemma 2.} Let $A = [A_1\ \ A_2 \ \ A_3]\in\BBR^{3 \times 3}.$
Then
\begin{equation}
A^\rmT[A_2\times A_3\ \ \ A_3\times A_1\ \ \ A_1\times A_2]
= (\det A)I_3. \label{ATA2I3} \end{equation}
Now, let $x\in\BBR^3.$ Then,
\begin{equation}
[A_2\times A_3\ \ \ A_3\times A_1\ \ \ A_1\times A_2]x^\times
= (Ax)^\times A. \label{A2A3AxA} \end{equation}
{\bf Proof.} See {\it xlii}) of Fact 4.12.1 in \cite[p. 385]{bernstein2018}.
In the case where $A$ is nonsingular, the second statement follows from \eqref{ATAxA} and \eqref{ATA2I3}.
In the case where $A$ is singular, the conclusion follows by continuity since both sides of \eqref{A2A3AxA} are continuous functions of the columns $(A_1,A_2,A_3)$ of $A$ and the set of nonsingular matrices is dense in $\BBR^{3\times3}$.
$\square$
{\bf Proof of Proposition 1.}
Note that
\begin{equation}
\dot S(q) = \sum_{i=1}^3\dot q_i\p_{q_i} S(q). \label{pf1} \end{equation}
Furthermore, it follows from \eqref{A2A3AxA} that
\begin{align}
[S(q)\dot q]^\times S(q)
&= [ S_2(q)\times S_3(q) \ \ \ S_3(q)\times S_1(q) \ \ \ S_1(q)\times S_2(q) ] \dot q^\times. \label{pf2} \end{align}
Therefore, \eqref{pf1} and \eqref{pf2} imply that {\bf(a)} and {\bf(b)} are equivalent.
To prove that {\bf(b)} and {\bf(c)} are equivalent, note that {\bf(b)} is equivalent to $L(\dot{q})=R(\dot{q})$ for all $\dot{q}\in\BBR^3$, where $L$ and $R$ are the linear operators defined for all $x=[x_1\ \ x_2\ \ x_3]^\rmT\in\BBR^3$ by \begin{align} L(x)&= [ \p_{q_1} S(q)x \ \ \ \p_{q_2} S(q)x \ \ \ \p_{q_3} S(q)x]- \sum_{i=1}^3 x_i\p_{q_i} S(q), \\
R(x)&= [ S_2(q)\times S_3(q) \ \ \ S_3(q)\times S_1(q) \ \ \ S_1(q)\times S_2(q) ] x^\times.
\end{align}
Since $R$ and $L$ are linear, it follows that $L(\dot{q})=R(\dot{q})$ for all $\dot{q}\in\BBR^3$ if and only if \begin{equation} L(e_i)=R(e_i),\quad\textrm{ for all $i=1,2,3$,}\label{eqLR} \end{equation} where ${e_1=[1\ \ 0\ \ 0]^\rmT}$, ${e_2=[0\ \ 1\ \ 0]^\rmT}$, and ${e_3=[0\ \ 0\ \ 1]^\rmT}$ because $(e_1,e_2,e_3)$ is a basis of $\BBR^3$.
Next, note that
\begin{align}
L(e_1)&= [ \p_{q_1} S_1(q) \ \ \p_{q_2} S_1(q) \ \ \p_{q_3} S_1(q)]-
[ \p_{q_1} S_1(q) \ \ \p_{q_1} S_2(q) \ \ \p_{q_1} S_3(q)]\nn\\
&= [0 \ \ \ \p_{q_2} S_1(q)-\p_{q_1} S_2(q) \ \ \ \p_{q_3} S_1(q)-\p_{q_1} S_3(q)],\label{L1}\\
L(e_2)&= [\p_{q_1} S_2(q)-\p_{q_2} S_1(q) \ \ \ 0 \ \ \ \p_{q_3} S_2(q)-\p_{q_2} S_3(q)],\label{L2}\\
L(e_3)&= [\p_{q_1} S_3(q)-\p_{q_3} S_1(q) \ \ \ \p_{q_2} S_3(q)-\p_{q_3} S_2(q) \ \ \ 0],\label{L3}
\end{align} and
\begin{align}
R(e_1)&=[ S_2(q)\times S_3(q) \ \ \ S_3(q)\times S_1(q) \ \ \ S_1(q)\times S_2(q) ][0\ \ \ e_3 \ \ -e_2],\nn\\
&=[0\ \ S_1(q)\times S_2(q) \ \ S_1(q)\times S_3(q)],\label{R1}\\
R(e_2)&=[S_2(q)\times S_1(q) \ \ 0\ \ S_2(q)\times S_3(q)],\label{R2}\\
R(e_3)&=[S_3(q)\times S_1(q) \ \ \ S_3(q)\times S_2(q)\ \ \ 0].\label{R3}
\end{align}
Comparing \eqref{L1}--\eqref{L3} with \eqref{R1}--\eqref{R3} shows that \eqref{eqLR} is equivalent to {\bf(c)}. Finally, \eqref{identshort} follows from \eqref{identnodotq1}--\eqref{identnodotq3} and \eqref{ATAxA}.
$\square$
To demonstrate the relevance of \eqref{identJ} to \eqref{eqnwanted}, note that transposing and rearranging \eqref{identJ} yields
\begin{equation}
\dot S(q)^\rmT - \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT \\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT \\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT \matr
= S(q)^\rmT[S(q)\dot q]^\times,
\label{identJ3}
\end{equation}
and thus, assuming that $S(q)$ is nonsingular,
\begin{equation}
S(q)^{-\rmT}\left(\dot S(q)^\rmT - \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT \\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT \\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT \matr\right)
= [S(q)\dot q]^\times.
\label{identJ4}
\end{equation}
Finally, multiplying \eqref{identJ4} on the right by $JS(q)\dot q$ yields
\begin{equation}
S(q)^{-\rmT}\left(\dot S(q)^\rmT J S(q)\dot q
- \matl \dot q^\rmT [\p_{q_1}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_2}S(q)]^\rmT J S(q)\dot q\\
\dot q^\rmT [\p_{q_3}S(q)]^\rmT J S(q)\dot q\matr\right)
= \omega(q,\dot q)\times J\omega(q,\dot q),
\end{equation}
which is precisely \eqref{eqnwanted}.
For a given choice of $q$, it is easier to verify \eqref{identnodotq1}--\eqref{identnodotq3} than \eqref{identJ} or \eqref{identnoJ}.
In the next three sections, \eqref{identnodotq1}--\eqref{identnodotq3} are verified for 3-2-1 and 3-1-3 Euler angles as well as Euler parameters (quaternions).
\section{\large Verification of \eqref{identnodotq1}--\eqref{identnodotq3} for 3-2-1 Euler Angles}
Letting $(\Psi,\Theta,\Phi)$ denote 3-2-1 (azimuth-elevation-bank) Euler angles, it follows that
\begin{equation} \omega(q,\dot q) = S(\Phi,\Theta)\dot q, \label{omegavectrixres} \end{equation}
where
\begin{align}
S(\Phi,\Theta)
&=[ S_1 \ \ \ S_2(\Phi) \ \ \ S_3(\Phi,\Theta)]\\
&=\matl
1&0&-\sin\Theta \\
0&\cos\Phi &(\sin\Phi)\cos\Theta\\
0&-\sin\Phi &(\cos\Phi)\cos\Theta \matr, \label{Scols}
\end{align}
\begin{equation}
q =\matl q_1 \\ q_2 \\ q_3\matr\triangleq \matl \Phi \\ \Theta \\ \Psi\matr. \end{equation}
Note that $\det S(\Phi,\Theta) = \cos\Theta,$ and thus $S(\Phi,\Theta)$ is singular if and only if gimbal lock occurs.
Hence,
\begin{align}
\p_{q_2}S_1(q)-\p_{q_1}S_2(q)&= -\p_{\Phi}S_2(\Phi)=\matl 0\\ \sin\Phi \\ \cos\Phi\matr= S_1\times S_2(\Phi),\nn\\
\p_{q_3}S_1(q)-\p_{q_1}S_3(q)&=-\p_{\Phi}S_3(\Phi,\Theta)
=\matl 0\\ -\cos\Phi\,\cos\Theta\\ \sin\Phi \,\cos\Theta\matr=
S_1\times S_3(\Phi,\Theta),\\
\p_{q_3}S_2(q)-\p_{q_2}S_3(q)&=-\p_{\Theta}S_3(\Phi,\Theta)=
\matl \cos\Theta\\ \sin\Phi\,\sin\Theta\\ \cos\Phi \,\sin\Theta\matr=
S_2(\Phi)\times S_3(\Phi,\Theta).\nn \end{align} Hence, \eqref{identnodotq1}--\eqref{identnodotq3} hold, and thus \eqref{identJ} and \eqref{identnoJ} are verified.
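As an independent sanity check (not part of the derivation), the identities \eqref{identnodotq1}--\eqref{identnodotq3} can also be confirmed symbolically. The following short script, which assumes Python with the SymPy library, does so for the matrix $S(\Phi,\Theta)$ in \eqref{Scols}.
\begin{verbatim}
import sympy as sp

Phi, Theta, Psi = sp.symbols('Phi Theta Psi')
q = (Phi, Theta, Psi)

# S(q) for 3-2-1 Euler angles
S = sp.Matrix([
    [1, 0, -sp.sin(Theta)],
    [0, sp.cos(Phi), sp.sin(Phi)*sp.cos(Theta)],
    [0, -sp.sin(Phi), sp.cos(Phi)*sp.cos(Theta)],
])
cols = [S[:, i] for i in range(3)]

def curl_identity(i, j):
    # checks  d S_i/d q_j - d S_j/d q_i == S_i x S_j   (0-indexed)
    lhs = sp.diff(cols[i], q[j]) - sp.diff(cols[j], q[i])
    return sp.simplify(lhs - cols[i].cross(cols[j])) == sp.zeros(3, 1)

assert all(curl_identity(i, j) for (i, j) in [(0, 1), (0, 2), (1, 2)])
print("identities verified for 3-2-1 Euler angles")
\end{verbatim}
Replacing \texttt{S} and \texttt{q} by the corresponding quantities of the next two sections verifies the 3-1-3 and Euler-parameter cases in the same way (for the latter, after substituting $q_1 = \sqrt{1-q_2^2-q_3^2-q_4^2}$).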
\section{\large Verification of \eqref{identnodotq1}--\eqref{identnodotq3} for 3-1-3 Euler Angles}
Letting $(\Phi,\Theta,\Psi)$ denote 3-1-3 (precession-nutation-spin) Euler angles, it follows that
\begin{equation} \omega(q,\dot q) = S(\Psi,\Theta)\dot q, \label{omegavectrixres313} \end{equation}
where
\begin{align} S(\Psi,\Theta)
&= [ S_1 \ \ \ S_2(\Psi) \ \ \ S_3(\Psi,\Theta)]\\
&= \matl
0&\cos\Psi & (\sin\Psi) \sin\Theta\\
0&-\sin\Psi &(\cos\Psi)\sin\Theta \\
1&0&\cos\Theta \matr,
\end{align}
\begin{equation}
q=\matl q_1\\ q_2\\ q_3\matr\triangleq\matl
\Psi \\
\Theta \\
\Phi \matr. \end{equation}
Note that $\det S(\Psi,\Theta) = \sin\Theta,$ and thus $S(\Psi,\Theta)$ is singular if and only if gimbal lock occurs.
Hence,
\begin{align}
\p_{q_2}S_1(q)-\p_{q_1}S_2(q)&= -\p_{\Psi}S_2(\Psi)=\matl \sin \Psi\\ \cos\Psi \\ 0\matr= S_1\times S_2(\Psi),\nn\\
\p_{q_3}S_1(q)-\p_{q_1}S_3(q)&=-\p_{\Psi}S_3(\Psi,\Theta)
=\matl -\cos\Psi\,\sin\Theta\\ \sin\Psi\,\sin\Theta \\ 0\matr=
S_1\times S_3(\Psi,\Theta),\\
\p_{q_3}S_2(q)-\p_{q_2}S_3(q)&=-\p_{\Theta}S_3(\Psi,\Theta)=
\matl -\sin\Psi\,\cos\Theta\\ -\cos\Psi\,\cos\Theta\\ \sin\Theta\matr=
S_2(\Psi)\times S_3(\Psi,\Theta).\nn \end{align} Hence, \eqref{identnodotq1}--\eqref{identnodotq3} hold, and thus \eqref{identJ} and \eqref{identnoJ} are verified.
\section{Verification of \eqref{identshort} for Euler Parameters}
To avoid gimbal lock, an alternative approach is to use Euler parameters (quaternions).
In this case,
\begin{align}
\tilde q = \matl q_1\\ q_2\\ q_3\\ q_4\matr = \matl \cos \half\theta\\[1ex] (\sin\half\theta)n\matr, \label{eulerparvecgencoord}
\end{align}
where $\theta\in(-\pi,\pi]$ is the eigenangle and $n\in\BBR^3$ is the unit eigenaxis.
Since $q_1^2+q_2^2+q_3^2+q_4^2=1,$ it follows that $q_1 = \sqrt{1 - q_2^2-q_3^2-q_4^2},$ and thus the generalized coordinates are
$q = [q_2\ \ q_3\ \ q_4]^\rmT.$
With this notation, assuming that $\theta\ne\pi$ and thus $q_1>0,$ it follows that \eqref{omegaS} holds with
\begin{align}
S(q) = 2\matl
q_1+\dfrac{q_2^2}{q_1} &
q_4+\dfrac{q_2q_3}{q_1} &
-q_3+\dfrac{q_2q_4}{q_1}\\[2ex]
-q_4+\dfrac{q_2q_3}{q_1} &
q_1+\dfrac{q_3^2}{q_1} &
q_2+\dfrac{q_3q_4}{q_1}\\[2ex]
q_3+\dfrac{q_2q_4}{q_1} &
-q_2+\dfrac{q_3q_4}{q_1} &
q_1+\dfrac{q_4^2}{q_1}\matr.
\end{align}
Next, note that, for all $i=2,3,4,$ $\p_{q_i}q_1=-q_i/q_1$. Thus,
\begin{align}
\p_{q_3}S_1(q)-\p_{q_2}S_2(q)&=2\matl
-\dfrac{q_3}{q_1}+\dfrac{q_2^2q_3}{q_1^3}-\dfrac{q_3}{q_1}-\dfrac{q_2^2q_3}{q_1^3}\\[2ex]
\dfrac{q_2}{q_1}+\dfrac{q_2q_3^2}{q_1^3}+\dfrac{q_2}{q_1}-\dfrac{q_3^2q_2}{q_1^3}\\[2ex]
1+\dfrac{q_2q_3q_4}{q_1^3}+1-\dfrac{q_2q_3q_4}{q_1^3}
\matr=\frac{4}{q_1}\matl
-q_3\\
q_2\\
q_1
\matr,\\
\p_{q_4}S_1(q)-\p_{q_2}S_3(q)&=2\matl -\dfrac{q_4}{q_1}+\dfrac{q_2^2q_4}{q_1^3}-\dfrac{q_4}{q_1}-\dfrac{q_2^2q_4}{q_1^3}\\[2ex]
-1+\dfrac{q_2q_3q_4}{q_1^3}-1-\dfrac{q_2q_3q_4}{q_1^3}\\[2ex]
\dfrac{q_2}{q_1}+\dfrac{q_2q_4^2}{q_1^3}+\dfrac{q_2}{q_1}-\dfrac{q_2q_4^2}{q_1^3}
\matr=\frac{4}{q_1}\matl
-q_4\\
-q_1\\
q_2
\matr,\\
\p_{q_4}S_2(q)-\p_{q_3}S_3(q)&=2\matl
1+\dfrac{q_2q_3q_4}{q_1^3}+1-\dfrac{q_2q_3q_4}{q_1^3}\\[2ex]
-\dfrac{q_4}{q_1}+\dfrac{q_3^2q_4}{q_1^3}-\dfrac{q_4}{q_1}-\dfrac{q_3^2q_4}{q_1^3}\\[2ex]
\dfrac{q_3}{q_1}+\dfrac{q_3q_4^2}{q_1^3}+\dfrac{q_3}{q_1}-\dfrac{q_3q_4^2}{q_1^3}
\matr=\frac{4}{q_1}\matl
q_1\\
-q_4\\
q_3
\matr. \end{align} Thus, \begin{align}
M(q)&\triangleq[\p_{q_4}S_2(q)-\p_{q_3}S_3(q)\ \
\p_{q_2}S_3(q)-\p_{q_4}S_1(q)\ \
\p_{q_3}S_1(q)-\p_{q_2}S_2(q)]\nn\\
&=
\frac{4}{q_1}\matl
q_1&q_4&-q_3\\ -q_4&q_1&q_2 \\q_3&-q_2&q_1 \matr, \end{align} and using $q_1^2+q_2^2+q_3^2+q_4^2=1$ yields \begin{align}
M(q)^\rmT S(q)&=
\frac{8}{q_1}\matl
q_1&-q_4&q_3\\ q_4&q_1&-q_2 \\-q_3&q_2&q_1 \matr \matl q_1+\dfrac{q_2^2}{q_1} & q_4+\dfrac{q_2q_3}{q_1} & -q_3+\dfrac{q_2q_4}{q_1}\\[2ex] -q_4+\dfrac{q_2q_3}{q_1} & q_1+\dfrac{q_3^2}{q_1} & q_2+\dfrac{q_3q_4}{q_1}\\[2ex] q_3+\dfrac{q_2q_4}{q_1} & -q_2+\dfrac{q_3q_4}{q_1} & q_1+\dfrac{q_4^2}{q_1}\matr \nn\\ &=\frac{8}{q_1}\matl 1&0&0\\0&1&0\\0&0&1\matr. \label{eqMS} \end{align} Since $\det(M(q))=64/q_1^2$, \eqref{eqMS} implies that $\det(S(q))=8/q_1$. Thus, $S(q)$ is nonsingular and satisfies \eqref{identshort}.
Consequently, \eqref{identJ} and \eqref{identnoJ} are verified for Euler parameters.
\section*{Acknowledgments}
The authors are grateful to one of the reviewers for bringing \cite{meirovitch} to our attention and independently confirming {\bf (c)} of Proposition 1.
\section{Conclusions}
We used Lagrangian dynamics to derive Euler's equation using Euler angles and Euler parameters (quaternions) as generalized coordinates.
Although the strength of Lagrangian dynamics lies in its ability to avoid free-body analysis in the presence of conservative reaction forces, this derivation illustrates the connection between Lagrangian dynamics and the dynamics of a single unconstrained rigid body.
A more advanced approach is to apply Lagrangian dynamics on Lie groups as presented in \cite{taeyoung}.
\end{document} | arXiv |
Localic completion of generalized metric spaces I
Steven Vickers
Following Lawvere, a generalized metric space (gms) is a set $X$ equipped with a metric map from $X^{2}$ to the interval of upper reals (approximated from above but not from below) from 0 to $\infty$ inclusive, and satisfying the zero self-distance law and the triangle inequality.
We describe a completion of gms's by Cauchy filters of formal balls. In terms of Lawvere's approach using categories enriched over $[0,\infty]$, the Cauchy filters are equivalent to flat left modules.
The completion generalizes the usual one for metric spaces. For quasimetrics it is equivalent to the Yoneda completion in its netwise form due to Kunzi and Schellekens and thereby gives a new and explicit characterization of the points of the Yoneda completion.
Non-expansive functions between gms's lift to continuous maps between the completions.
Various examples and constructions are given, including finite products.
The completion is easily adapted to produce a locale, and that part of the work is constructively valid. The exposition illustrates the use of geometric logic to enable point-based reasoning for locales.
Keywords: topology, locale, geometric logic, metric, quasimetric, completion, enriched category
2000 MSC: primary 54E50; secondary 26E40, 06D22, 18D20, 03G30
http://www.tac.mta.ca/tac/volumes/14/15/14-15.dvi
http://www.tac.mta.ca/tac/volumes/14/15/14-15.ps
ftp://ftp.tac.mta.ca/pub/tac/html/volumes/14/15/14-15.dvi
ftp://ftp.tac.mta.ca/pub/tac/html/volumes/14/15/14-15.ps
The Steady State: A Key Description of Biology
The steady state - when populations, concentrations and spatial distributions are unchanging in time - is one of the most important physical concepts for understanding cell biology. This is not to say that cells are generally in steady states: after all, the cell cycle is a never-ending repeated sequence of changes of from one stage of life to another. In many cases, however, a steady state is a reasonable approximation to a short (enough) window of time in a cellular process, perhaps in a localized region of interest. Think "homeostasis" (but beware of biologists' informal use of the word "equilibrium" which we clarify below). Even if/when a steady state does not hold even approximately, it is still an essential conceptual reference point. You absolutely must understand it.
Two examples of (potential) steady states are sketched above. A table-top waterfall will be in a steady state as water continuously is pumped up from the lower reservoir - to which it returns by gravity. A complex chemical cycle, meant to evoke the citric acid cycle, takes several types of molecules as inputs and catalytically changes them into different output molecules. Note that both examples require the input of energy and/or matter. And neither will be in a steady state if the inputs are removed - or in the transient period after the system is initiated. Other examples, observed over suitable time windows, include motion by motor proteins (which requires constant input of ATP) and active transport (which requires a driving electro/chemical gradient or ATP). Below, we'll look more closely at a Michaelis-Menten process used to model catalytic and biosynthetic processes.
Schematically, a steady state consists of one or more inputs and one or more outputs, with each component unchanging in time.
A more typical (and complex) case includes multiple inputs/outputs and an internal cycle
Let's define a steady state more precisely. We'll restrict ourselves to considering a dsicrete set of states (e.g., chemical and conformational states, possibly of many different types of molecules) simply numbered $i = 1, 2, \ldots$ or $j = 1, 2, \ldots$ with concentrations $[i]$ or $[j]$ which interconvert according to first-order rate constants $k_{ij}$. Then the steady-state condition that every concentration be unchanging in time will be satisfied if, and only if, the flow into a state (i.e., the number of molecules changing into that state) exactly balances the flow out of a state (conversions to other states):
Since the concentrations are steady (unchanging) in time, the time derivative of every derivative will be zero, as we explore further below. Note that spatial derivatives (gradients) need not vanish in a steady state: in a (hypothetically perfectly steady) cell, a molecule could be produced in one region and consumed in another.
Equilibrium is a special steady state
What's the difference between a steady state and equilibrium? That's a little bit of a trick question because equilibrium is a steady state! Equilibrium is a very special steady state, however, in which the condition (2) is satisfied in a special way - namely, by the stricter condition of detailed balance,
That is, the number of transitions per second from $i$ to $j$ is exactly balanced by reverse transitions. Because this holds for every pair of states in equilibrium, it is said to hold "in detail." Intuitively, it should be clear that if state $i$ experiences equal in and out flow with every other state, then its population/concentration cannot change, and so condition (2) will be satisfied.
Non-equilibrium steady states require inputs and outputs
In the sketch of the waterfall at the top, I was careful to include the power cord because non-equilibrium steady states (when there are net flows through a system) require input of energy implicitly or explicitly. Steady states are not self-sustaining. Note that the input and removal of matter from a system implies the use of energy to enact those processes. Without the input of energy (and/or material) a system will relax to equilibrium, and biophysicists sometimes like to say, "Equilibrium equals death." Equilibrium's condition of detailed balance (3), which also holds for flow in real space, means there is no net flow of matter or chemical processes, which are absolutely required for life. What a cell does, after all, is to orchestrate a complex series of directed processes: signals travel from cell surface to nucleus; genes are transcribed and translated. If these processes don't overwhelmingly proceed in a single direction, the cell just won't work. In fact, if you think about it, a lot the cell's use of activated carriers like ATP goes toward maintaining the directionality of signaling processes - e.g., via phosphorylation - rather than doing work per se.
Steady-state analysis of a Michaelis-Menten (MM) process
A standard MM process models conversion of a substrate (S) to a product (P), catalyzed by an enzyme (E) after formation of a bound-but-uncatalyzed complex (ES).
The simple MM model can also be viewed as a cycle because the enzyme E is re-used. Blue arrows indicate steady net flows.
(The standard MM process here can be contrasted with the corrected MM cycle that allows for reverse events and physical single-step processes.)
A steady state will occur if P is removed at the same rate as S is added. Mathematically, for steady state, we set the time derivative of the ES complex to zero.
The result yields what looks like a dissociation constant in terms of the steady-state (SS) concentrations:
In words, in the steady state, the ratio of concentrations on the left assumes the constant value given by the particular ratio of rate constants in the middle. The effective "equilibrium" constant $K_M$ is conventionally defined but not strictly needed.
The basic steady state result (5) can be used to calculate other quantities of interest, such as the overall rate of product production
now given in terms of the steady-state E and S concentrations, which should be known.
The standard MM model is unphysical
All molecular processes are reversible, so any model with a uni-directional arrow is necesarily approximate: see the discussion of cycles. The full MM cycle, allowing for reverse events and permitting only single-step processes, is subjected to a (more complicated) steady-state analysis in an advanced section.
‹ Advanced Cycle Logic up Thermodynamic Connection Between Free Energy and Work › | CommonCrawl |
s block Elements Class 11 Important Questions with Answers
Get s-block elements important questions with detailed answers for class 11 exams preparation. View the Important Question bank for Class 11 and class 12 Chemistry complete syllabus. These important questions and answers will play significant role in clearing concepts of Chemistry class 11. This question bank is designed keeping NCERT in mind and the questions are updated with respect to upcoming Board exams. You will get here all the important questions for class 11 chemistry chapters. Learn the concepts of s block elements and other topics of chemistry class 11 and 12 syllabus with these important questions and answers along with the notes developed by experienced faculty. Click Here for Detailed Chapter-wise Notes of Chemistry for Class 11th, JEE & NEET. You can access free study material for all three subject's Physics, Chemistry and Mathematics. Click Here for Detailed Notes of any chapter. eSaral provides you complete edge to prepare for Board and Competitive Exams like JEE, NEET, BITSAT, etc. We have transformed classroom in such a way that a student can study anytime anywhere. With the help of AI we have made the learning Personalized, adaptive and accessible for each and every one. Visit eSaral Website to download or view free study material for JEE & NEET. Also get to know about the strategies to Crack Exam in limited time period. Very Short Answer (1 Mark)
Q.. Why first group elements are called alkali metals ?
Ans. Group I elements are highly reactive and with water (moisture in the atmosphere) form strong alkalies, so they are called alkali metals.
Q.. Write the chemical name and formula of washing soda.
Ans. Washing soda is sodium carbonate $\left(N a_{2} C O_{3}\right)$.
Q.. Why do alkali metals not occur in free state ?
Ans. They are highly reactive, therefore, they occur in combined state and do not occur in free state.
Q.. Write the important minerals of lithium.
Ans. The important minerals of lithium are:
(i) Lipidolite $(L i . N a . K)_{2} A l_{2}\left(S i O_{3}\right)_{3} \cdot F(O H)$ (ii) Spodumene $L i A I S i_{2} O_{3}$ (iii) Amblygonite $\operatorname{LiAl}\left(P O_{4}\right) F$
Q.. How sodium hydroxide is prepared at large scale ?
Ans. At large scale, sodium hydroxide is prepared by castner kellner cell.
Q.. Why $\mathrm{KHCO}_{3}$ is not prepared by Solvay process ?
Ans.Because solubility of $\mathrm{KHCO}_{3}$ is fairly large as compared to $\mathrm{NaHCO}_{3}$.
Q.. What is chemical composition of the Plaster of Paris ?
Ans. Plaster of paris is calcium sulphate hemihydrate : $\mathrm{CaSO}_{4} \cdot \frac{1}{2} \mathrm{H}_{2} \mathrm{O}$ or $\left(\mathrm{CaSO}_{4}\right)_{2} \cdot \mathrm{H}_{2} \mathrm{O}$
Q.. Why do alkali metals have low density ?
Ans. Due to weak metallic bonds and large atomic size, their density is low.
Q.. Why is first ionization energy of alkali metals lower than those of alkaline earth metals ?
Ans. Alkali metals have bigger atomic size, therefore they have lower first I.E. than group 2 elements.
Q.. First group elements are strong reducing agents, why ?
Ans. Because they have a strong tendency to lose outer most electron.
Q.. Explain why ? LiI is more soluble than KI in ethanol. [NCERT]
Ans. $K I$ In the chemical bond is ionic in character. On the other hand due to small size of lithium ion and its high polarising power the bond in is predominently covalent in character. Hence LiI is more soluble than in ethanol.
Q.. LiH is more stable than $\mathrm{NaH}$ Explain.
Ans. Both $L i^{+}$ and $H^{-}$ have small size and their combination has high lattice energy. Therefore LiH is stable as compared with $\mathrm{NaH}$
Q.. Why is $B e C l_{2}$ soluble in organic solvents ?
Ans. $BeCl_{2}$ is covalent, therefore soluble in organic solvents.
Q.. Name the metal which floats on water without any apparent reaction with water. [NCERT]
Ans. Lithium floats on water without any apparent reaction with it.
Q.. Name an element which is invariably bivalent and whose oxide is soluble in excess of NaOH and its dipositive ion has a noble gas core.
Ans. The element is beryllium, its oxide $B e O$ is soluble in excess of $\mathrm{NaOH}$ $B e O+2 N a O H \longrightarrow N a_{2} B e O_{2}+H_{2} O$ Its dipositive ion has electronic configuration $\left(B e^{2+}=1 s^{2}\right)$
Q.. State reason for the high solubility of $B e C l_{2}$ in organic solvents.
Ans. Because $BeCl_{2}$ is a covalent compound.
Q.. What is the cause of diagonal relationship ? [NCERT]
Ans. The charge over radius ratio, i.e., polarizing power is similar, that is the cause of diagonal relationship.
Q.. Name the alkali metals which forms superoxide when heated in excess of air.
Ans. Potassium, Rubidium and caesium form superoxide when heated in excess of air.
Q.. Which out of K, Mg,Ca & Al form amphoteric oxide ?
Ans. $Al$ forms an amphoteric oxide, i.e., one that is acidic as well as basic in nature.
Q.. Explain the following : Sodium wire is used to dry benzene but cannot be used to dry ethanol.
Ans. Sodium metal removes moisture from benzene by reacting with water. However, ethanol cannot be dried by using sodium because it reacts with sodium. $2Na + 2C_{2}H_{5}OH \longrightarrow 2C_{2}H_{5}ONa + H_{2}$
Q.. Why is $\mathrm{CaCl}_{2}$ added to $N a C l$ in extraction of $N a$ by Down cell ?
Ans. $\mathrm{CaCl}_{2}$ reduces melting point of $N a C l$ and increases electrical conductivity.
Q.. Carbon dioxide is passed through a suspension of limestone in water. Write balanced chemical equation for the above reaction.
Ans. $\mathrm{CaCO}_{3}+\mathrm{H}_{2} \mathrm{O}+\mathrm{CO}_{2} \longrightarrow \mathrm{Ca}\left(\mathrm{HCO}_{3}\right)_{2}$
Q.. What do we get when crystals of washing soda exposed to air ?
Ans. We get amorphous sodium carbonate because it loses its water molecules of crystallization (efflorescence).
Q.. $M g_{3} N_{2}$ when react with water gives off $\mathrm{NH}_{3}$ but $H C l$ is not obtained from $M g C l_{2}$on reaction with water at room temperature.
Ans. $Mg_{3}N_{2}$ is a salt of a strong base, $Mg(OH)_{2}$, and a weak acid $(NH_{3})$, and hence gets hydrolysed to give $NH_{3}$. In contrast, $MgCl_{2}$ is a salt of a strong base, $Mg(OH)_{2}$, and a strong acid, $HCl$, and hence does not undergo hydrolysis to give $HCl$ at room temperature.
Q.. Why caesium can be used in photoelectric cell while lithium cannot be ?
Ans. Caesium has the lowest while lithium has the highest ionization enthalpy among all the alkali metals. Hence, caesium can lose electron very easily while lithium cannot.
Q.. Which nitrates are used in pyrotechnics ?
Ans. Strontium and Barium nitrates are used in pyrotechnics for giving red and green flames.
Q..What are s-block elements ? Write their electronic configuration.
Ans. The elements in which the last electron enters the s-orbital of their outermost energy level are called s-block elements. The block consists of Group 1 and Group 2 elements. Their electronic configuration is $[\text{noble gas}]\,ns^{1}$ for Group 1 and $[\text{noble gas}]\,ns^{2}$ for Group 2.
Q.. Name the metals which are found in each of the following minerals : (i) Chile Salt Petre (ii) Marble (iii) Epsomite (iv) Bauxite.
Ans. (i) $\quad N a$(ii) $\mathrm{Ca}$ (iii) $M g \quad$ (iv) $\quad A l$
Q.. What is composition of Portland cement ? What is average composition of good quality cement ? [NCERT]
Ans. $CaO = 50$ to $60\%$, $SiO_{2} = 20$ to $25\%$, $Al_{2}O_{3} = 5$ to $10\%$, $MgO = 2$ to $3\%$, $Fe_{2}O_{3} = 1$ to $2\%$, $SO_{3} = 1$ to $2\%$. For good quality cement, the ratio of silica $(SiO_{2})$ to alumina $(Al_{2}O_{3})$ should be between 2.5 and 4.0, and the ratio of lime $(CaO)$ to the total of the oxides $SiO_{2}$, $Al_{2}O_{3}$ and $Fe_{2}O_{3}$ should be as close to 2 as possible.
Q.. Write chemical reactions involved in Down process for obtaining Mg from sea water.
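Ans. $MgCl_{2}(\text{sea water}) + Ca(OH)_{2} \longrightarrow Mg(OH)_{2} + CaCl_{2}$ ; $Mg(OH)_{2} \stackrel{\Delta}{\longrightarrow} MgO + H_{2}O$ ; $MgO + C + Cl_{2} \stackrel{\Delta}{\longrightarrow} MgCl_{2} + CO$ ; electrolysis of fused $MgCl_{2}$ : at cathode $Mg^{2+} + 2e^{-} \longrightarrow Mg$, at anode $2Cl^{-} \longrightarrow Cl_{2} + 2e^{-}$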
Q.. What is the mixture of $\mathrm{CaCN}_{2}$ and carbon called ? How is it prepared ? Give its use.
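Ans. The mixture of $CaCN_{2}$ and carbon is called nitrolim. It is prepared by heating calcium carbide in nitrogen at about $1373\ K$ : $CaC_{2} + N_{2} \longrightarrow CaCN_{2} + C$. It is used as a fertilizer.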
Q.. State the difficulties in extraction of alkaline earth metals from the natural deposits.
Ans. Like alkali metals, alkaline earth metals are also highly electropositive and strong reducing agents. Same difficulties, we face in the extraction of these metals. Therefore, these metals are extracted by electrolysis of their fused metal halides.
Q.. Starting with sodium chloride how would you proceed to prepare (state the steps only) : (i) sodium metal (ii) sodium hydroxide (iii) sodium peroxide (iv) sodium carbonate [NCERT]
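Ans. (i) Sodium metal : electrolysis of fused $NaCl$ (mixed with $CaCl_{2}$) in the Downs cell. (ii) Sodium hydroxide : electrolysis of an aqueous (brine) solution of $NaCl$ in the Castner-Kellner cell. (iii) Sodium peroxide : burning the sodium obtained in (i) in excess of air : $2Na + O_{2} \longrightarrow Na_{2}O_{2}$. (iv) Sodium carbonate : the Solvay process, $NaCl + NH_{3} + CO_{2} + H_{2}O \longrightarrow NaHCO_{3} + NH_{4}Cl$, followed by $2NaHCO_{3} \stackrel{\Delta}{\longrightarrow} Na_{2}CO_{3} + H_{2}O + CO_{2}$.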
Q..Write three general characteristics of the elements of s-block of the periodic table which distinguish them from the elements of the other blocks. [NCERT]
Ans. (i) They do not show variable oxidation states. (ii) They are soft metals having low melting and boiling point. (iii) They are highly electropositive and most reactive metals.
Q.. Why is $L i F$ almost insoluble in water whereas $L i C l$ is soluble not only in water but also in acetone ? [NCERT]
Ans. $LiF$ is an ionic compound containing small ions and hence has very high lattice enthalpy. The enthalpy of hydration in this case is not sufficient to compensate for the high lattice enthalpy; hence, $LiF$ is insoluble in water. $LiCl$ has partial covalent character due to polarization of the chloride ion by the small $Li^{+}$ ion. Thus, $LiCl$ has partial covalent character and partial ionic character and hence is soluble in water as well as in less polar solvents such as acetone.
Q.. When an alkali metal dissolves in liquid ammonia the solution acquires different colours. Explain the reasons for this type of colour change. [NCERT]
Ans. When an alkali metal is dissolved in liquid ammonia it produces a blue coloured conducting solution due to formation of the ammoniated cation and the ammoniated electron as given below : $M + (x + y)NH_{3} \longrightarrow [M(NH_{3})_{x}]^{+} + [e(NH_{3})_{y}]^{-}$ When the concentration is above about 3 M, the colour of the solution is copper-bronze. This colour change is because the concentrated solution contains clusters of metal ions and hence possesses metallic lustre.
Q.. Arrange the following in the decreasing order of the property mentioned :
Q.. Alkali metals have low ionisation enthalpies. Why is it so?
Ans. Alkali metals have larger atomic size, therefore they have lower ionisation enthalpies. There is less force of attraction between valence electron and nucleus, therefore less energy is required to remove electron.
Q.. Which out of $L i, N a, K, B e, M g, C a$ has lowest ionisation enthalpy and why ?
Ans. $K$ has the lowest ionisation energy due to the largest atomic size among these elements. The force of attraction between the valence electron and the nucleus is less, therefore it can lose the electron easily.
Q.. What is responsible for the blue colour of the solution of alkali metal in liquid ammonia ? Give chemical equation also.
Ans. The solvated electron, $e(NH_{3})_{x}$, or ammoniated electron, is responsible for the blue colour of alkali metal solutions in liquid ammonia. It absorbs light from the visible region and radiates the complementary colour: $Na(s) + (x + y)NH_{3}(l) \longrightarrow [Na(NH_{3})_{y}]^{+} + e(NH_{3})_{x}^{-}$
Q.. The alkali metals follow the noble gases in their atomic structure. What properties of these metals can be predicted from the information ?
Ans. (i) They form unipositive ions. (ii) Their second ionisation energy is very high. (iii) They have weak metallic bonds due to larger size and only one valence electron.
Q.. Comment on each of the following observations : (i) The mobilities of the alkali metal ions in aqueous solution are $Li^{+} < Na^{+} < K^{+} < Rb^{+} < Cs^{+}$. (ii) Lithium is the only alkali metal that combines directly with nitrogen. (iii) The standard electrode potentials of the alkali metals are very close to one another. (iv) $LiF$ is the least water-soluble of the alkali metal fluorides.
Ans. (i) $Li^{+}$ ions are smallest in size and therefore most heavily hydrated; that is why they have the lowest mobility in aqueous solution. $Cs^{+}$ ions are largest in size, least hydrated, and therefore have the highest mobility. The size of the hydrated cation decreases down the group; therefore, the mobility of the ions increases down the group. (ii) $Li$ is smallest in size and the best reducing agent; therefore, it forms a nitride with $N_{2}$: $6Li + N_{2} \longrightarrow 2Li_{3}N$ (lithium nitride). (iii) It is due to the small difference in their standard reduction potentials, which are the resultant of sublimation energy, ionisation energy and hydration energy. (iv) $LiF$ is least soluble because, with two small ions, it has very high lattice energy compared to the other fluorides of alkali metals.
Q.. Why do alkali metals impart characteristic colours to the flame of a bunsen burner ? What is the colour imparted to the flame by each of the following metals ? Lithium, Sodium and Potassium.
Ans. When the alkali metal or any of its compounds is introduced into a flame, the electrons absorb energy from the flame and get excited to higher energy levels. When these electrons return to the ground state, the absorbed energy is given out in the form of radiation in the visible region. Lithium imparts carmine red, sodium gives golden yellow and potassium gives pale violet colour to the flame.
Q.. Which one of the alkaline earth metal carbonate is thermally most and least stable and why ?
Ans. $BaCO_{3}$ is thermally most stable due to greater ionic character and high lattice energy, whereas $BeCO_{3}$ is thermally least stable because it is covalent and has less lattice energy.
Q.. Commercial aluminium always contains some magnesium. Name two such alloys of aluminium. What properties are imparted by the addition of magnesium in these alloys ? [NCERT]
Ans. Duralumin and Magnalium are alloys of $Al$. $Mg$ is lighter in density than $Al$; therefore, it makes the alloys lighter. These alloys are used in automobile engines and aeroplanes.
Q.. Arrange the (i) hydroxides and (ii) Sulphates of alkaline earth metals in order of decreasing solubilities., giving a suitable reason for each.
Ans. (i) Hydroxides : $Ba(OH)_{2} > Sr(OH)_{2} > Ca(OH)_{2} > Mg(OH)_{2} > Be(OH)_{2}$. Solubility of the hydroxides increases down the group because the lattice energy decreases more rapidly than the hydration energy. (ii) Sulphates : $BeSO_{4} > MgSO_{4} > CaSO_{4} > SrSO_{4} > BaSO_{4}$. Solubility of the sulphates goes on decreasing down the group because lattice energy dominates over hydration energy.
Q.. State the properties of beryllium different than other elements of the group.
Ans. Properties of beryllium different than other elements of the group (i) Beryllium is harder than other elements. (ii) Melting and boiling points are higher than that of other elements. (iii) It does not react with water but other elements do. (iv) It does not react with acids to form hydrogen. (v) It forms covalent compounds but others form ionic compounds. (vi) Beryllium oxide is amphoteric but other oxides are basic.
Q.. Mention the general trends in Group 1 and in Group 2 with increasing atomic number with respect to (i) density (ii) melting point (iii) atomic size (iv) ionization enthalpy. [NCERT]
Q.. How do the following properties change on moving from Group 1 to Group 2 in the periodic table ? (i) Atomic size (ii) Ionization enthalpy (iii) Density (iv) Melting points.
Ans. (i) Atomic size decreases from group 1 to group 2 due to increase in effective nuclear charge. (ii) First ionisation enthalpy increases from group 1 to group 2 due to decrease in atomic size. (iii) Density increases from group 1 to 2. (iv) Melting points increase from group 1 to 2.
Q.. Compare and contrast the chemistry of Group 1 metals with that of Group 2 metals with respect to (i) nature of oxides (ii) solubility and thermal stability of carbonates (iii) polarizing power of cations (iv) reactivity and reducing power.
Ans. (i) Group 1 metals form oxides of strongly basic nature (except $Li_{2}O$). Group 2 metals form oxides of less basic nature. (ii) Carbonates : Carbonates of Group 1 are soluble and thermally stable; those of Group 2 are sparingly soluble and their solubility decreases down the group. (iii) Polarizing power of cations increases from Group 1 to Group 2. (iv) Reactivity and reducing power decrease from Group 1 to Group 2.
Q.. Explain what happens when ? (i) Sodium hydrogen carbonate is heated (ii) Sodium amalgam reacts with water (iii) Fused sodium metal reacts with ammonia.
Ans. (i) On heating, sodium bicarbonate forms sodium carbonate \[ 2\mathrm{NaHCO}_{3} \stackrel{\Delta}{\longrightarrow} \mathrm{Na}_{2}\mathrm{CO}_{3}+\mathrm{H}_{2}\mathrm{O}+\mathrm{CO}_{2}\] (ii) When sodium amalgam, $Na/Hg$, is formed, the vigour of the reaction of sodium with water decreases. \[ 2\,\mathrm{Na}/\mathrm{Hg}+2\mathrm{H}_{2}\mathrm{O} \longrightarrow 2\mathrm{NaOH}+\mathrm{H}_{2}+2\mathrm{Hg} \] (iii) Sodium reacts with ammonia to form the amide. \[2\mathrm{Na}+2\mathrm{NH}_{3} \stackrel{\Delta}{\longrightarrow} 2\mathrm{NaNH}_{2}+\mathrm{H}_{2} \]
Q.. State as to why ? (i) An aqueous solution of sodium carbonate gives an alkaline test. $Na_{2}CO_{3} + H_{2}O \longrightarrow 2Na^{+} + OH^{-} + HCO_{3}^{-}$ (ii) Sodium is prepared by an electrolytic method and not by a chemical method.
Ans. (i) An aqueous solution of sodium carbonate has a large concentration of hydroxyl ions making it alkaline in nature. (ii) Sodium is a very strong reducing agent therefore it cannot be extracted by the reduction of its ore (chemical method). Thus the best way to prepare sodium is by carrying electrolysis of its molten salts containing impurities of calcium chloride.
Q.. Explain why : (i) Lithium on being heated in air mainly forms the monoxide and not peroxide. (ii) $K, R b$ and $C s$ on being heated in the presence of excess supply of air form superoxides in preference to oxides and peroxides. (iii) An aqueous solution of sodium carbonate is alkaline in nature.
Ans. (i) $L i^{+}$ ions is smaller in size. It is stabilized more by smaller anion, oxide ion $\left(O^{2-}\right)$ as compared to larger anion, peroxide ion $\left(O_{2}^{2-}\right)$. (ii) $\quad K^{+}, R b^{+}, C s^{+},$ are large cations. A large cation is more stabilized by large anions. since superoxide ion, $O_{2}^{-}$ is quite large, $K, R b$ and $C s$ form superoxides in preference to oxides and peroxides. (iii) In aqueous solution sodium carbonate undergoes hydrolysis forming sodium hydroxide. $\mathrm{Na}_{2} \mathrm{CO}_{3}+\mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{NaHCO}_{3}+\mathrm{NaOH}$
Q.. Write balanced equations or reactions between :
Q.. Name an alkali metal carbonate which is thermally unstable and why ? Give its decomposition reaction.
Ans. $L i_{2} C O_{3}$ is thermally unstable because it is covalent. It decomposes to form $L i_{2} O$ and $C O_{2}$ $L i_{2} C O_{3} \stackrel{\Delta}{\longrightarrow} L i_{2} O+C O_{2}$
Q.. Why are ionic hydrides of only alkali metals and alkaline earth metals are known ? Give two examples.
Ans. Alkali metals and alkaline earth metals are most electropositive due to low ionisation energy or enthalpy therefore, they can form ionic hydrides e.g., $\mathrm{NaH}, \mathrm{KH}$ and $\mathrm{CaH}_{2}$
Q.. Why does the following reaction : proceed better with $K F$ than with $N a F ?$
Ans. It is because $K F$ is more ionic than $N a F$
Q.. The enthalpy of formation of hypothetical $\operatorname{CaCl}(s)$ theoretically found to be equal to $188 \mathrm{kJ} \mathrm{mol}^{-1}$ and the $\Delta H_{f}^{\circ}$ for $C a C l_{2}(s)$ is $-795 k J m o l^{-1} .$ Calculate the $\Delta H^{\circ}$ for the disproportionation reaction.
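Ans. The disproportionation reaction is $2CaCl(s) \longrightarrow Ca(s) + CaCl_{2}(s)$. Taking $\Delta H_{f}^{\circ}(CaCl) = -188\ kJ\ mol^{-1}$ (the enthalpy of formation of an ionic solid is negative), $\Delta H^{\circ} = \Delta H_{f}^{\circ}(CaCl_{2}) - 2\Delta H_{f}^{\circ}(CaCl) = -795 - 2(-188) = -419\ kJ\ mol^{-1}$. The large negative value shows that $CaCl(s)$ is unstable with respect to disproportionation, which is why calcium occurs only in the +2 state.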
Q.. Why is it that the s-block elements never occur in free state in nature ? What are their usual modes of occurrence and how are they generally prepared ? [NCERT]
Ans. s-block elements are highly reactive, therefore they never occur in free state rather occur in combined state in the form of halides, carbonates, sulphates. They are generally prepared by electrolysis of their molten salts
Q.. Name the chief form of occurrence of magnesium in nature. How is magnesium extracted from one of it ores ? [NCERT]
Ans. Mg occurs in the form of $MgCl_{2}$ in sea water, from which it can be extracted. Sea water containing $MgCl_{2}$ is concentrated under the sun and is treated with $Ca(OH)_{2}$. $Mg(OH)_{2}$ is thus precipitated, filtered and heated to give the oxide. The oxide is treated with $C$ and $Cl_{2}$ to get $MgCl_{2}$: $MgO + C + Cl_{2} \stackrel{\text{heat}}{\longrightarrow} MgCl_{2} + CO$. $MgCl_{2}$ is fused with $NaCl$ and $CaCl_{2}$ at $970-1023\ K$ and the molten mixture is electrolysed. Magnesium is liberated at the cathode and chlorine is evolved at the anode. At Cathode : $Mg^{2+} + 2e^{-} \longrightarrow Mg$ At Anode : $2Cl^{-} \longrightarrow Cl_{2} + 2e^{-}$ A stream of coal gas is blown through the cell to prevent oxidation of the Mg metal.
Q.. How is pure magnesium prepared from sea water ? What happens when mg is burned in air ? Write chemical equations of reactions involved.
Ans. Sea water contains $MgCl_{2}$, which is concentrated under the sun and treated with calcium hydroxide, $Ca(OH)_{2}$; magnesium hydroxide is thus precipitated, filtered and heated to give the oxide. The oxide is treated with carbon and $Cl_{2}$ to get $MgCl_{2}$: $MgO + C + Cl_{2} \longrightarrow MgCl_{2} + CO$. $MgCl_{2}$ is mixed with sodium chloride so as to reduce its melting point and increase its electrical conductivity. The molten mixture is electrolysed using a steel cathode and a carbon anode. A stream of coal gas is blown through the cell to prevent oxidation of the metal. At cathode : $Mg^{2+} + 2e^{-} \longrightarrow Mg$ At anode : $2Cl^{-} - 2e^{-} \longrightarrow Cl_{2}$ Mg obtained in the liquid state is further distilled to give pure Mg. When Mg burns in air, it forms magnesium oxide and magnesium nitride. $2Mg + O_{2} \longrightarrow 2MgO$ ; $3Mg + N_{2} \longrightarrow Mg_{3}N_{2}$
Q.. (i) Draw a neat and labelled diagram of Castner-Kellner cell for the manufacture of caustic soda. (ii) Give chemical equations of the reaction of caustic soda with (a) ammonium chloride, and (b) carbon dioxide.
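Ans. (ii) (a) $NaOH + NH_{4}Cl \longrightarrow NaCl + NH_{3} + H_{2}O$ (b) $2NaOH + CO_{2} \longrightarrow Na_{2}CO_{3} + H_{2}O$ ; with excess $CO_{2}$, $Na_{2}CO_{3} + CO_{2} + H_{2}O \longrightarrow 2NaHCO_{3}$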
Q.. List three properties of lithium in which it differs from the rest of the alkali metals. [NCERT]
Ans. (i) Lithium reacts with $N_{2}$ to form lithium nitride whereas other alkali metals do not react with $N_{2}$. (ii) Lithium forms the monoxide whereas other elements form peroxides and superoxides. (iii) Lithium predominantly forms covalent compounds whereas others form ionic compounds.
Q.. What happens when : (i) sodium metal is dropped in water. (ii) sodium metal is heated in free supply of air. (iii) sodium peroxide dissolves in water.
Ans. (i) $2 N a+2 H_{2} O \longrightarrow 2 N a O H+H_{2}$ ; sodium hydroxide is formed with evolution of $H_{2}(g)$ The hydrogen gas catches fire due to highly exothermic process. (ii) $\quad 2 \mathrm{Na}+\mathrm{O}_{2} \longrightarrow \mathrm{Na}_{2} \mathrm{O}_{2}$ sodium peroxide is formed. (iii) Sodium hydroxide and hydrogen peroxide are formed. $\mathrm{Na}_{2} \mathrm{O}_{2}+2 \mathrm{H}_{2} \mathrm{O} \longrightarrow 2 \mathrm{NaOH}+\mathrm{H}_{2} \mathrm{O}_{2}$
Q.. The hydroxides and carbonates of sodium and potassium are easily soluble in water while the corresponding salts of magnesium and calcium are sparingly soluble in water. Explain.
Ans. The lattice enthalpies of hydroxides and carbonates of magnesium and calcium are very high due to the presence of divalent cations. The enthalpy of hydration cannot compensate for the energy required to break the lattice in these compounds. Hence, they are sparingly soluble in water. On the otherhand, the lattice enthalpies of hydroxides and carbonates of sodium and potassium are low due to the presence of monovalent cations. The enthalpy of hydration in this case is sufficient to break the lattice in these compounds. Hence, hydroxides and carbonates of sodium and potassium are easily soluble in water.
Q.. What happens when : (i) magnesium is burnt in air (ii) quicklime is heated with silica (iii) chlorine reacts with slaked lime (iv) calcium nitrate is heated [NCERT]
Q.. Like lithium in Group 1, beryllium shows anomalous behaviour in Group 2. Write three such properties of beryllium which make it anomalous in the group.
Ans. (i) $Be$ forms an amphoteric oxide whereas others form basic oxides. (ii) $BeCl_{2}$ is covalent; others form ionic halides. (iii) $Be$ does not react even with hot water whereas others react easily with water. $Be$ has the smallest atomic size, the highest ionisation energy and high polarising power, which make it anomalous in this group.
Q.. Beryllium exhibits some similarities with aluminium. Point out three such properties. [NCERT]
Ans. (i) $BeO$ and $Al_{2}O_{3}$ are amphoteric. (ii) $Be$ and $Al$ react with $NaOH$ to form $[Be(OH)_{4}]^{2-}$ and $[Al(OH)_{4}]^{-}$. (iii) $Be_{2}C$ and $Al_{4}C_{3}$ react with $H_{2}O$ to form methane.
Q.. Compare the solubility and thermal stability of the following compounds of the alkali metals with those of the alkaline earth metals : (i) nitrates (ii) carbonates (iii) sulphates.
Ans. (i) Nitrates of alkali metals are more stable than that of alkaline earth metals. All nitrates are soluble in water. (ii) Carbonates of alkali metals are more stable and more soluble in water than that of alkaline earth metals. (iii) Sulphates of alkali metals are more stable and more soluble in water than that of alkaline earth metal.
Q.. Discuss the anomalous behaviour of Lithium. Give its diagonal relationship with Magnesium.
Ans. $Li$ shows anomalous behaviour due to its smallest size, highest ionisation energy and highest polarising power. Resemblance with $Mg$: (i) Both react with $N_{2}$ to form nitrides. (ii) Both form predominantly covalent compounds. (iii) Both form the monoxide. (iv) Carbonates of both are thermally unstable.
Q.. State, why : (i) A solution of $\mathrm{Na}_{2} \mathrm{CO}_{3}$ is alkaline. (ii) Alkali metals are prepared by electrolysis of their fused chlorides. (iii) Sodium is found more useful than potassium.
Ans. (i) $Na_{2}CO_{3}$ is alkaline due to its hydrolysis: it forms more $OH^{-}$ than $H^{+}$ because $H_{2}CO_{3}$ is a weak acid; therefore the solution is alkaline. (ii) Alkali metals are strong reducing agents and highly reactive with water; therefore, they are prepared by electrolysis of their fused chlorides. (iii) Sodium is more useful than potassium because sodium is less reactive and found in greater abundance than $K$.
Q.. Contrast the action of heat on the following : (i) $\quad \mathrm{Na}_{2} \mathrm{CO}_{3}$ and $\mathrm{CaCO}_{3}$ (ii) $\quad M g C l_{2} \cdot 6 H_{2} O$ and $C a C l_{2} \cdot 6 H_{2} O$ (iii) $\quad \mathrm{Ca}\left(\mathrm{NO}_{3}\right)_{2}$ and $\mathrm{NaNO}_{3}$
Ans. (i) Sodium carbonate does not decompose whereas calcium carbonate decomposes on heating. $\quad \mathrm{CaCO}_{3} \stackrel{\Delta}{\longrightarrow} \mathrm{CaO}+\mathrm{CO}_{2}$ (ii) $\mathrm{MgCl}_{2} \cdot 6 \mathrm{H}_{2} \mathrm{O} \stackrel{\Delta}{\longrightarrow} \mathrm{MgO}+2 \mathrm{HCl}+5 \mathrm{H}_{2} \mathrm{O}$ $\mathrm{CaCl}_{2} \cdot 6 \mathrm{H}_{2} \mathrm{O} \longrightarrow \mathrm{CaCl}_{2}+6 \mathrm{H}_{2} \mathrm{O}$ (iii) $2 \mathrm{Ca}\left(\mathrm{NO}_{3}\right)_{2} \stackrel{\Delta}{\longrightarrow} 2 \mathrm{CaO}+4 \mathrm{NO}_{2}+\mathrm{O}_{2}$ $2 \mathrm{NaNO}_{3} \stackrel{\Delta}{\longrightarrow} 2 \mathrm{NaNO}_{2}+\mathrm{O}_{2}$
Q.. What happens when : (i) Carbon dioxide gas is passed through an aqueous solution of sodium carbonate. (ii) Potassium carbonate is heated with milk of lime. (iii) Lithium nitrate is heated. Give chemical equation for the reactions involved.
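Ans. (i) Sodium hydrogen carbonate is formed : $Na_{2}CO_{3} + CO_{2} + H_{2}O \longrightarrow 2NaHCO_{3}$ (ii) Caustic potash is formed : $K_{2}CO_{3} + Ca(OH)_{2} \longrightarrow 2KOH + CaCO_{3}$ (iii) Unlike other alkali metal nitrates, lithium nitrate decomposes to the oxide : $4LiNO_{3} \stackrel{\Delta}{\longrightarrow} 2Li_{2}O + 4NO_{2} + O_{2}$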
Q.. What happen when (i) Magnesium is burnt in air (ii) Quick lime is heated with silica (iii) Chlorine reacts with slaked lime (iv) Calcium nitrate is heated
Q.. 'The chemistry of beryllium is not essentially ionic'. Justify the statement by making a reference to the nature of oxide, chloride and fluoride of beryllium. [NCERT]
Ans. Be predominantly forms covalent compounds due to smaller size, higher ionisation energy and high polarising power. $B e O, B e C l_{2}$ and $B e F_{2}$ are covalent and get hydrolysed by water. BeO is least soluble in water due to covalent character. $B e O+H_{2} O \longrightarrow B e(O H)_{2}$ $B e C l_{2}+2 H_{2} O \longrightarrow B e(O H)_{2}+2 H C l$ $B e F_{2}+2 H_{2} O \longrightarrow B e(O H)_{2}+2 H F$ They are less soluble in water but more soluble in organic solvents, which shows they are covalent in nature.
Q.. Compare and contrast the chemistry of group 1 metals with that of group 2 metals with respect to : (i) nature of oxides (ii) solubility and thermal stability of carbonates (iii) polarizing power of cations (iv) reactivity and reducing power.
Ans. (i) Oxides of group 1 elements are more basic than that of group 2. (ii) Solubility and thermal stability of carbonates of group 1 is higher than that of group 2. (iii) Polarizing power of cations of group 2 is higher than that of group 1. (iv) Reactivity and reducing power of group 1 elements is higher than corresponding group 2 elements.
Q.. Describe the importance of the following in different areas : (i) limestone (ii) cement (iii) plaster of paris.
Ans. (i) Lime stone : (a) It is used in manufacture of glass and cement. (b) It is used as flux in extraction of iron. (ii) Cement : (a) It is used as building material. (b) It is used in concrete and reinforced concrete, in plastering and in construction of bridges, dams and buildings. (iii) Plaster of Paris : (a) It is used for manufacture of chalk. (b) It is used for plastering fractured bones. (c) It is used for making casts and moulds. (d) It is also used in dentistry, in ornaments work and for taking casts of statues.
Q.. Describe the general characteristics of group1 elements.
Ans. (i) Group 1 consists of lithium, sodium, potassium, rubidium, caesium and francium. (ii) The elements have electronic configuration $ns^{1}$. (iii) The elements have 1 electron in the outermost s-orbital and have a strong tendency to lose this electron, so : (a) These are highly electropositive metals. (b) They are never found in the free state due to their high reactivity. (c) They form $M^{+}$ ions. (iv) Atomic radii : Atomic radii of alkali metals are the largest in their respective periods. (v) Density : Their densities are quite low. Lithium is the lightest known metal. (vi) Oxidation state : The alkali metal atoms show only the +1 oxidation state. (vii) Reducing agents : Due to the very low value of ionisation energy, alkali metals are strong reducing agents, and the reducing character increases from $Na$ to $Cs$; but $Li$ is the strongest reducing agent. (viii) When alkali metals are heated in air, lithium forms the normal oxide $(Li_{2}O)$, sodium forms the peroxide $(Na_{2}O_{2})$, and potassium, rubidium and caesium form superoxides along with peroxides. (ix) The alkali metals form hydrides of the type $MH$ on reacting with hydrogen at about $673\ K$ ($Li$ forms its hydride at $1073\ K$). The ionic character of the hydrides increases from $LiH$ to $CsH$.
Q.. Describe the manufacture process of sodium-carbonate.
Ans. Sodium carbonate, $Na_{2}CO_{3}\cdot 10H_{2}O$, or washing soda is manufactured by the Solvay process. Principle of the process : Carbon dioxide gas is passed through a brine solution (about 28% $NaCl$) saturated with ammonia; sodium hydrogen carbonate separates and is heated to give sodium carbonate. Process : It is completed in the following steps : (i) Saturation of brine with ammonia : Limestone (calcium carbonate) is strongly heated to form carbon dioxide. The ammonia and carbon dioxide mixture is passed through a tower in which saturated brine is poured down. The ammoniated brine is filtered to remove impurities of calcium and magnesium carbonate. (ii) Carbonation : carbon dioxide is passed up the tower, and the milky suspension of sodium hydrogen carbonate formed is removed and filtered with the help of a vacuum pump. (iii) Ammonia recovery tower : The filtrate from step (ii) is mixed with calcium hydroxide and heated with steam. The ammonia obtained is recycled along with the carbon dioxide. Potassium carbonate cannot be prepared by the Solvay process, as the solubility of $KHCO_{3}$ is fairly large as compared to that of $NaHCO_{3}$.
Q.. Give chemical equations for the various reactions taking place during the manufacture of washing soda by Solvay's process. What are the raw materials used in this process ? What is the by-product in this process ?
Ans. The raw materials used in this process are sodium chloride and limestone. Calcium chloride is the by-product in this process.
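The chemical equations for the Solvay process are : $NH_{3} + H_{2}O + CO_{2} \longrightarrow NH_{4}HCO_{3}$ ; $NaCl + NH_{4}HCO_{3} \longrightarrow NaHCO_{3} + NH_{4}Cl$ ; $2NaHCO_{3} \stackrel{\Delta}{\longrightarrow} Na_{2}CO_{3} + H_{2}O + CO_{2}$ ; $CaCO_{3} \stackrel{\Delta}{\longrightarrow} CaO + CO_{2}$ ; $CaO + H_{2}O \longrightarrow Ca(OH)_{2}$ ; $2NH_{4}Cl + Ca(OH)_{2} \longrightarrow CaCl_{2} + 2NH_{3} + 2H_{2}O$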
Q.. Describe three industrial uses of caustic soda. Describe one method of manufacture of sodium hydroxide. What happens when sodium hydroxides reacts with (i) aluminium metal (ii) $\mathrm{CO}_{2}$ (iii) $\mathrm{SiO}_{2} ?$
Ans. Industrial uses of caustic soda : (i) It is used in the manufacture of soap. (ii) It is used in the paper industry. (iii) It is used in textile industries. Sodium hydroxide can be prepared by the electrolysis of a saturated solution of brine $(NaCl)$.
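The reactions asked for are : (i) with aluminium metal : $2Al + 2NaOH + 2H_{2}O \longrightarrow 2NaAlO_{2} + 3H_{2}$ (ii) with $CO_{2}$ : $2NaOH + CO_{2} \longrightarrow Na_{2}CO_{3} + H_{2}O$ (iii) with $SiO_{2}$ : $2NaOH + SiO_{2} \longrightarrow Na_{2}SiO_{3} + H_{2}O$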
Q.. (i) How is plaster of Paris prepared ? Describe its chief property due to which it is widely used. (ii) How would you explain ? (a) $BeO$ is insoluble but $BeSO_{4}$ is soluble in water. (b) $BaO$ is soluble but $BaSO_{4}$ is insoluble in water. (c) $LiI$ is more soluble than $KI$. (d) $NaHCO_{3}$ is known in solid state but $Ca(HCO_{3})_{2}$ is not isolated in solid state. [NCERT]
Q.. (i) Name an element which is invariable bivalent and whose oxide is soluble in excess of $N a O H$ and its dipositive ion has a noble gas core. (ii) Differentiate between (a) quick-lime (b) lime-water (c) slaked-lime.
Ans. (i) Beryllium : It is invariably divalent. Its oxide is soluble in excess of $NaOH$ and its dipositive ion has a noble gas core. (ii) Quick lime : It is calcium oxide $(CaO)$. It is produced by heating $CaCO_{3}$. Lime water : The clear aqueous solution of calcium hydroxide in water is called lime water. It is formed when $Ca(OH)_{2}$ is dissolved in excess of $H_{2}O$. Slaked lime : Solid calcium hydroxide is known as slaked lime. It is produced when water is added to $CaO$.
Q.. What is the effect of heat on the following compounds ? (Write equations for the reactions). (i) Calcium carbonate. (ii) Magnesium chloride hexahydrate. (iii) Gypsum. (iv) Magnesium sulphate heptahydrate.
Ans. (i) Action of heat on calcium carbonate : When calcium carbonate is heated, a colourless gas carbon dioxide is given out and a white residue of calcium oxide is left behind. (ii) Action of heat on magnesium chloride hexahydrate : It loses water and hydrogen chloride and yields a residue of magnesium oxide. (iii) Action of heat on gypsum : It forms a hemi-hydrate, called plaster of paris. (iv) Action of heat on magnesium sulphate heptahydrate : It loses water of crystallization and forms anhydrous salt.
Q.. (i) Describe two important uses of each of the following: (a) caustic soda (b) sodium carbonate (c) quicklime.
Ans. (i) (a) Caustic soda is used in the preparation of pure fats and oils. It is also used for the preparation of rayon (artificial silk). (b) Sodium carbonate is used for the manufacture of glass. It is used in the softening of hard water. (c) Quick lime is used for white washing. It is used for the manufacture of glass and cement.
[Polymeric structure of $\left.B e C l_{2}\right]$ in solid state
Zadoff–Chu sequence
A Zadoff–Chu (ZC) sequence, also referred to as Chu sequence or Frank–Zadoff–Chu (FZC) sequence,[1]: 152 is a complex-valued mathematical sequence which, when applied to a signal, gives rise to a new signal of constant amplitude. When cyclically shifted versions of a Zadoff–Chu sequence are imposed upon a signal the resulting set of signals detected at the receiver are uncorrelated with one another.
They are named after Solomon A. Zadoff, David C. Chu and Robert L. Frank.
Description
Zadoff–Chu sequences exhibit the useful property that cyclically shifted versions of themselves are orthogonal to one another.
A generated Zadoff–Chu sequence that has not been shifted is known as a root sequence.
The complex value at each position n of each root Zadoff–Chu sequence parametrised by u is given by
$x_{u}(n)={\text{exp}}\left(-j{\frac {\pi un(n+c_{\text{f}}+2q)}{N_{\text{ZC}}}}\right),\,$
where
$0\leq n<N_{\text{ZC}}$,
$0<u<N_{\text{ZC}}$ and ${\text{gcd}}(N_{\text{ZC}},u)=1$,
$c_{\text{f}}=N_{\text{ZC}}\mod 2$,
$q\in \mathbb {Z} $,
$N_{\text{ZC}}={\text{length of sequence}}$.
Zadoff–Chu sequences are CAZAC sequences (constant amplitude zero autocorrelation waveform).
Note that the special case $q=0$ results in a Chu sequence.[1]: 151 Setting $q\neq 0$ produces a sequence that is equal to the Chu sequence cyclically shifted by $q$ and multiplied by a complex number of modulus 1, where by multiplied we mean that each element is multiplied by the same number.
Properties of Zadoff-Chu sequences
1. They are periodic with period $N_{\text{ZC}}$ if $N_{\text{ZC}}$ is odd.
$x_{u}(n+N_{\text{ZC}})=x_{u}(n)$
2. If $N_{\text{ZC}}$ is prime, the Discrete Fourier Transform of a Zadoff–Chu sequence is another Zadoff–Chu sequence conjugated, scaled and time scaled.
$X_{u}[k]=x_{u}^{*}({\tilde {u}}k)X_{u}[0]$ where ${\tilde {u}}$ is the multiplicative inverse of u modulo $N_{\text{ZC}}$.
3. The auto correlation of a Zadoff–Chu sequence with a cyclically shifted version of itself is zero, i.e., it is non-zero only at one instant which corresponds to the cyclic shift.
4. The cross-correlation between two prime length Zadoff–Chu sequences, i.e. different values of $u,u=u_{1},u=u_{2}$, is constant $1/{\sqrt {N_{\text{ZC}}}}$, provided that $u_{1}-u_{2}$ is relatively prime to $N_{\text{ZC}}$.[2]
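A short numerical sketch (not part of the article; the length $N_{\text{ZC}}=839$ and the roots 25 and 34 are arbitrary choices) generates root sequences from the definition above and spot-checks properties 3 and 4:

```python
# Generate root Zadoff-Chu sequences and check the CAZAC properties.
import numpy as np

def zadoff_chu(u, N, q=0):
    n = np.arange(N)
    cf = N % 2
    return np.exp(-1j * np.pi * u * n * (n + cf + 2 * q) / N)

N = 839                                 # prime length (as used for the LTE PRACH)
x = zadoff_chu(25, N)
assert np.allclose(np.abs(x), 1.0)      # constant amplitude

# circular autocorrelation via the FFT: zero at every nonzero shift (property 3)
acf = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(x)))
assert np.allclose(np.abs(acf[1:]), 0.0, atol=1e-8)

# cross-correlation between two roots with gcd(u1 - u2, N) = 1 (property 4):
# constant magnitude sqrt(N), i.e. 1/sqrt(N) after normalising by N
y = zadoff_chu(34, N)
ccf = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y)))
assert np.allclose(np.abs(ccf) / N, 1 / np.sqrt(N))
```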
Usages
Zadoff–Chu sequences are used in the 3GPP Long Term Evolution (LTE) air interface in the Primary Synchronization Signal (PSS), random access preamble (PRACH), uplink control channel (PUCCH), uplink traffic channel (PUSCH) and sounding reference signals (SRS).
By assigning orthogonal Zadoff–Chu sequences to each LTE eNodeB and multiplying their transmissions by their respective codes, the cross-correlation of simultaneous eNodeB transmissions is reduced, thus reducing inter-cell interference and uniquely identifying eNodeB transmissions.
Zadoff–Chu sequences are an improvement over the Walsh–Hadamard codes used in UMTS because they result in a constant-amplitude output signal, reducing the cost and complexity of the radio's power amplifier.[3]
See also
• Polyphase sequence
References
1. Zepernick, Hans-Jürgen; Finger, Adolf (2005). Pseudo Random Signal Processing: Theory and Application. Wiley. ISBN 978-0-470-86657-3.
2. Popovic, B.M. (1992). "Generalized Chirp-Like polyphase sequences with optimum correlation properties". IEEE Trans. Inf. Theory. 38 (4): 1406–9. doi:10.1109/18.144727.
3. Song, Lingyang; Shen, Jia, eds. (2011). Evolved Cellular Network Planning and Optimization for UMTS and LTE. New York: CRC Press. ISBN 978-1439806500.
Further reading
• Frank, R. L. (Jan 1963). "Polyphase codes with good nonperiodic correlation properties". IEEE Trans. Inf. Theory. 9 (1): 43–45. doi:10.1109/TIT.1963.1057798.
• Chu, D. C. (July 1972). "Polyphase codes with good periodic correlation properties". IEEE Trans. Inf. Theory. 18 (4): 531–532. doi:10.1109/TIT.1972.1054840.
• S. Beyme and C. Leung (2009). "Efficient computation of DFT of Zadoff-Chu sequences". Electron. Lett. 45 (9): 461–463. doi:10.1049/el.2009.3330.
\begin{document}
\title{A Fractional Calculus on Arbitrary Time Scales:\\ Fractional Differentiation and Fractional Integration\thanks{Part of first author's Ph.D., which is carried out at Sidi Bel Abbes University, Algeria.} \thanks{This is a preprint of a paper whose final and definite form will appear in the international journal \emph{Signal Processing}, ISSN 0165-1684. Paper submitted 04/Jan/2014; revised 19/Apr/2014; accepted for publication 12/May/2014.}}
\author{Nadia Benkhettou$^1$\\ \texttt{benkhettou$_{-}[email protected]} \and Artur M. C. Brito da Cruz$^{2, 3}$\\ \texttt{[email protected]} \and Delfim F. M. Torres$^3$\thanks{Corresponding author. Tel: +351 234370668; Fax: +351 234370066; Email: [email protected]}\\ \texttt{[email protected]}}
\date{$^1$Laboratoire de Math\'{e}matiques, Universit\'{e} de Sidi Bel-Abb\`{e}s\\ B.P. 89, 22000 Sidi Bel-Abb\`{e}s, Algerie\\[0.3cm] $^2$Escola Superior de Tecnologia de Set\'{u}bal\\ Estefanilha, 2910-761 Set\'{u}bal, Portugal\\[0.3cm] $^3$\text{Center for Research and Development in Mathematics and Applications (CIDMA)}\\ Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal}
\maketitle
\begin{abstract} We introduce a general notion of fractional (noninteger) derivative for functions defined on arbitrary time scales. The basic tools for the time-scale fractional calculus (fractional differentiation and fractional integration) are then developed. As particular cases, one obtains the usual time-scale Hilger derivative when the order of differentiation is one, and a local approach to fractional calculus when the time scale is chosen to be the set of real numbers.
\noindent \textbf{Keywords:} fractional differentiation, fractional integration, calculus on time scales.
\noindent \textbf{2010 Mathematics Subject Classification:} 26A33, 26E70. \end{abstract}
\section{Introduction}
Fractional calculus refers to differentiation and integration of an arbitrary (noninteger) order. The theory goes back to mathematicians as Leibniz (1646--1716), Liouville (1809--1882), Riemann (1826--1866), Letnikov (1837--1888), and Gr\"{u}nwald (1838--1920) \cite{book:Kilbas,book:Samko}. During the last two decades, fractional calculus has increasingly attracted the attention of researchers of many different fields \cite{book:Benchohra,MR2870885,Baleanu:Nigmatullin,TM:K:M,book:FCV,MR2090004,book:Ortigueira,Yang:Baleanu:etal}.
Several definitions of fractional derivatives/integrals have been defined in the literature, including those of Riemann--Liouville, Gr\"{u}nwald--Letnikov, Hadamard, Riesz, Weyl and Caputo \cite{book:Kilbas,Ortigueira:Trujillo:2012,book:Samko}. In 1996, Kolwankar and Gangal proposed a local fractional derivative operator that applies to highly irregular and nowhere differentiable Weierstrass functions \cite{MR1911751,K:G:96}. Here we introduce the notion of fractional derivative on an arbitrary time scale $\mathbb{T}$ (cf. Definition~\ref{def:fd:ts}). In the particular case $\mathbb{T} = \mathbb{R}$, one gets the local Kolwankar--Gangal fractional derivative $\lim_{h \rightarrow 0} \frac{f(t+h) - f(t)}{h^\alpha}$, which has been considered in \cite{K:G:96,K:G:97} as the point of departure for fractional calculus. One of the motivations to consider such local fractional derivatives is the possibility to deal with irregular signals, so common in applications of signal processing \cite{K:G:97}.
A time scale is a model of time. The calculus on time scales was initiated by Aulbach and Hilger in 1988 \cite{MR1062633}, in order to unify and generalize continuous and discrete analysis \cite{H2,H1}. It has a tremendous potential for applications and has recently received much attention \cite{ABRP,BP,BP1,CK,MyID:252}. The idea to join the two subjects --- the fractional calculus and the calculus on time scales --- and to develop a \emph{Fractional Calculus on Time Scales}, was born with the Ph.D. thesis of Bastos \cite{PhD:Nuno}. See also \cite{MR2933070,PhD:Auch,Nuno:Z,Nuno:hZ,Nuno:Lap,Kisela,Rib:Ant,PhD:Williams} and references therein. Here we introduce a general fractional calculus on time scales and develop some of its basic properties.
Fractional calculus is of increasing importance in signal processing \cite{book:Ortigueira}. This can be explained by several factors, such as the presence of internal noises in the structural definition of the signals. Our fractional derivative depends on the graininess function of the time scale. We trust that this possibility can be very useful in applications of signal processing, providing a concept of coarse-graining in time that can be used to model white noise that occurs in signal processing or to obtain generalized entropies and new practical meanings in signal processing. Indeed, let $\mathbb{T}$ be a time scale (continuous time $\mathbb{T} = \mathbb{R}$, discrete time $\mathbb{T} = h \mathbb{Z}$, $h > 0$, or, more generally, any closed subset of the real numbers, like the Cantor set). Our results provide a mathematical framework to deal with functions/signals $f(t)$ in signal processing that are not differentiable in the time scale, that is, signals $f(t)$ for which the equality $\Delta f(t) = f^\Delta(t) \Delta t$ does not hold. More precisely, we are able to model signal processes for which $\Delta f(t) = f^{(\alpha)}(t) (\Delta t)^\alpha$, $0 < \alpha \le 1$.
The time-scale calculus can be used to unify discrete and continuous approaches to signal processing in one unique setting. Interesting in applications is the possibility of dealing with more complex time domains. One extreme case, covered by the theory of time scales and surprisingly relevant also for the processing of signals, appears when one fixes the time scale to be the Cantor set \cite{Baleanu:TM:etal,Yang:Srivastava:etal}. The application of the local fractional derivative in a time scale different from the classical time scales $\mathbb{T} = \mathbb{R}$ and $\mathbb{T} = h \mathbb{Z}$ was proposed by Kolwankar and Gangal themselves: see \cite{K:G:97,K:G:98} where nondifferentiable signals defined on the Cantor set are considered.
The article is organized as follows. In Section~\ref{sec:prelim} we recall the main concepts and tools necessary in the sequel. Our results are given in Section~\ref{sec:MR}: in Section~\ref{sub:sec:FD} the notion of fractional derivative for functions defined on arbitrary time scales is introduced and the respective fractional differential calculus developed; the notion of fractional integral on time scales, and some of its basic properties, is investigated in Section~\ref{sub:sec:FI}. We end with Section~\ref{sec:Conc} of conclusions and future work.
\section{Preliminaries} \label{sec:prelim}
A time scale $ \mathbb{T}$ is an arbitrary nonempty closed subset of $ \mathbb{R}$. Here we only recall the necessary concepts of the calculus on time scales. The reader interested on the subject is referred to the books \cite{BP,BP1}. For a good survey see \cite{ABRP}.
\begin{definition} \label{def:jump:oper} Let $\mathbb{T}$ be a time scale. For $t \in \mathbb{T}$ we define the forward jump operator $\sigma:\mathbb{T}\rightarrow \mathbb{T}$ by $\sigma(t):=\inf\{s \in\mathbb{T} : s > t\}$, and the backward jump operator $\rho:\mathbb{T}\rightarrow \mathbb{T}$ by $\rho(t):=\sup\{s \in\mathbb{T} : s < t\}$. \end{definition}
\begin{remark} In Definition~\ref{def:jump:oper}, we put $\inf \emptyset =\sup \mathbb{T}$ (i.e., $\sigma(t)= t$) if $\mathbb{T}$ has a maximum $t$, and $\sup \emptyset =\inf \mathbb{T}$ (i.e., $\rho(t)= t$) if $\mathbb{T}$ has a minimum $t$, where $\emptyset$ denotes the empty set. \end{remark}
If $\sigma(t) > t$, then we say that $t$ is right-scattered; if $\rho(t) < t$, then $t$ is said to be left-scattered. Points that are simultaneously right-scattered and left-scattered are called isolated. If $t < \sup\mathbb{T}$ and $\sigma(t) = t$, then $t$ is called right-dense; if $t >\inf \mathbb{T}$ and $\rho(t)= t$, then $t$ is called left-dense. The graininess function $\mu :\mathbb{T}\rightarrow [0,\infty)$ is defined by $\mu(t) :=\sigma(t) - t$.
We make use of the set $\mathbb{T}^{\kappa}$, which is derived from the time scale $\mathbb{T}$ as follows: if $\mathbb{T}$ has a left-scattered maximum $M$, then $\mathbb{T}^{\kappa}=\mathbb{T} \setminus \{M\}$; otherwise, $\mathbb{T}^{\kappa}=\mathbb{T}$.
\begin{definition}[Delta derivative \cite{AB}] Assume $f:\mathbb{T}\rightarrow \mathbb{R}$ and let $t\in \mathbb{T}^{\kappa}$. We define $$ f^{\Delta}(t)=\lim_{s\rightarrow t}\frac{f(\sigma(s))-f(t)}{\sigma(s)-t}, \quad t \neq \sigma(s), $$ provided the limit exists. We call $f^{\Delta}(t)$ the delta derivative (or Hilger derivative) of $f$ at $t$. Moreover, we say that $f$ is delta differentiable on $\mathbb{T}^{\kappa}$ provided $f^{\Delta}(t)$ exists for all $t\in \mathbb{T}^{\kappa}$. The function $f^{\Delta}:\mathbb{T}^{\kappa}\rightarrow \mathbb{R}$ is then called the delta derivative of $f$ on $\mathbb{T}^{\kappa}$. \end{definition}
Delta derivatives of higher-order are defined in the usual way. Let $r\in\mathbb{N}$, $\mathbb{T}^{\kappa^{0}} := \mathbb{T}$, and $\mathbb{T}^{\kappa^i}:=\left(\mathbb{T}^{\kappa^{i-1}}\right)^\kappa$, $i = 1, \ldots, r$. For convenience we also put $f^{\Delta^0} = f$ and $f^{\Delta^1} = f^\Delta$. The $r$th-delta derivative $f^{\Delta^r}$ is given by $f^{\Delta^r} = \left(f^{\Delta^{r-1}}\right)^\Delta: \mathbb{T}^{\kappa^r} \rightarrow \mathbb{R}$ provided $f^{\Delta^{r-1}}$ is delta differentiable.
The following notions will be useful in connection with the fractional integral (Section~\ref{sub:sec:FI}).
\begin{definition} A function $f:\mathbb{T} \rightarrow \mathbb{R}$ is called regulated provided its right-sided limits exist (finite) at all right-dense points in $\mathbb{T}$ and its left-sided limits exist (finite) at all left-dense points in $\mathbb{T}$. \end{definition}
\begin{definition} A function $f:\mathbb{T}\rightarrow \mathbb{R}$ is called rd-continuous provided it is continuous at right-dense points in $\mathbb{T} $ and its left-sided limits exist (finite) at left-dense points in $\mathbb{T}$. The set of rd-continuous functions $f:\mathbb{T}\rightarrow \mathbb{R}$ is denoted by $\mathcal{C}_{rd}$. \end{definition}
\section{Main Results} \label{sec:MR}
We develop the basic tools of any fractional calculus: fractional differentiation (Section~\ref{sub:sec:FD}) and fractional integration (Section~\ref{sub:sec:FI}). Let $\mathbb{T}$ be a time scale, $t\in \mathbb{T}$, and $\delta >0$. We define the left $\delta$-neighborhood of $t$ as $\mathcal{U}^{-} :=\left] t-\delta ,t\right[ \cap \mathbb{T}$.
\subsection{Fractional Differentiation} \label{sub:sec:FD}
We begin by introducing a new notion: the fractional derivative of order $\alpha \in ]0,1]$ for functions defined on arbitrary time scales. For $\alpha = 1$ we obtain the usual delta derivative of the time-scale calculus.
\begin{definition} \label{def:fd:ts} Let $f:\mathbb{T}\rightarrow \mathbb{R}$, $t\in \mathbb{T}^{\kappa }$, and $\alpha \in ]0,1]$. For $\alpha \in ]0,1]\cap \left\{ 1/q : q \text{ is an odd number}\right\}$ (resp. $\alpha \in ]0,1] \setminus \left\{ 1/q : q\text{ is an odd number}\right\}$) we define $f^{(\alpha )}(t)$ to be the number (provided it exists) with the property that, given any $\epsilon >0$, there is a $\delta$-neighborhood $\mathcal{U}\subset \mathbb{T}$ of $t$ (resp. left $\delta$-neighborhood $\mathcal{U}^{-}\subset \mathbb{T}$ of $t$), $\delta > 0$, such that \begin{equation*} \left \vert \left[ f(\sigma (t))-f(s)\right] -f^{(\alpha )}(t)\left[ \sigma (t)-s\right] ^{\alpha }\right \vert \leq \epsilon \left \vert \sigma (t)-s\right \vert^{\alpha} \end{equation*} for all $s\in \mathcal{U}$ (resp. $s\in \mathcal{U}^{-}$). We call $f^{(\alpha )}(t)$ the fractional derivative of $f$ of order $\alpha $ at $t$. \end{definition}
Throughout the text, $\alpha$ is a real number in the interval $]0,1]$. The next theorem provides some useful relationships concerning the fractional derivative on time scales introduced in Definition~\ref{def:fd:ts}.
\begin{theorem} \label{T1} Assume $f:\mathbb{T}\rightarrow \mathbb{R}$ and let $t\in \mathbb{T}^{\kappa }$. The following properties hold: \begin{description} \item[(i)] Let $\alpha \in ]0,1]\cap \left\{ \frac{1}{q} : q\text{ is an odd number}\right\} $. If $t$ is right-dense and if $f$ is fractional differentiable of order $\alpha$ at $t$, then $f$ is continuous at $t$.
\item[(ii)] Let $\alpha \in ]0,1] \setminus \left\{\frac{1}{q} : q\text{ is an odd number}\right\}$. If $t$ is right-dense and if $f$ is fractional differentiable of order $\alpha$ at $t$, then $f$ is left-continuous at $t$.
\item[(iii)] If $f$ is continuous at $t$ and $t$ is right-scattered, then $f$ is fractional differentiable of order $\alpha$ at $t$ with \begin{equation*} f^{(\alpha )}(t)=\frac{f^{\sigma }(t)-f(t)}{(\mu (t))^{\alpha }}. \end{equation*}
\item[(iv)] Let $\alpha \in ]0,1]\cap \left\{ \frac{1}{q} : q\text{ is an odd number}\right\} $. If $t$ is right-dense, then $f$ is fractional differentiable of order $\alpha$ at $t$ if, and only if, the limit \begin{equation*} \lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha}} \end{equation*} exists as a finite number. In this case, \begin{equation*} f^{(\alpha )}(t)=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha}}. \end{equation*}
\item[(v)] Let $\alpha \in ]0,1] \setminus \left\{ \frac{1}{q} : q\text{ is an odd number}\right\}$. If $t$ is right-dense, then $f$ is fractional differentiable of order $\alpha$ at $t$ if, and only if, the limit \begin{equation*} \lim_{s\rightarrow t^{-}}\frac{f(t)-f(s)}{(t-s)^{\alpha}} \end{equation*} exists as a finite number. In this case, \begin{equation*} f^{(\alpha )}(t)=\lim_{s\rightarrow t^{-}}\frac{f(t)-f(s)}{(t-s)^{\alpha }}. \end{equation*}
\item[(vi)] If $f$ is fractional differentiable of order $\alpha$ at $t$, then $f(\sigma (t))=f(t)+(\mu (t))^{\alpha }f^{(\alpha )}(t)$. \end{description} \end{theorem}
\begin{proof} $(i)$ Assume that $f$ is fractional differentiable at $t$. Then, there exists a neighborhood $\mathcal{U}$ of $t$ such that \begin{equation*} \left \vert \left[ f(\sigma (t))-f(s)\right] -f^{(\alpha )}(t)\left[ \sigma (t)-s\right] ^{\alpha }\right \vert \leq \epsilon \left \vert \sigma (t)-s\right \vert^{\alpha} \end{equation*} for $s\in \mathcal{U}$. Therefore, for all $s \in \mathcal{U} \cap \left]t-\epsilon ,t+\epsilon \right[$, \begin{multline*} \left \vert f\left( t\right) -f\left( s\right) \right \vert \leq \left\vert \left[ f^{\sigma }(t)-f(s)\right] -f^{(\alpha )}(t)\left[ \sigma (t) -s\right]^{\alpha}\right \vert\\ +\left \vert \left[ f^{\sigma }(t)-f(t)\right] -f^{(\alpha )}(t)\left[ \sigma (t)-t\right] ^{\alpha }\right \vert +\left \vert f^{(\alpha )}(t)\right \vert \left \vert \left[ \sigma (t)-s \right] ^{\alpha }-\left[ \sigma (t)-t\right] ^{\alpha }\right \vert \end{multline*} and, since $t$ is a right-dense point, \begin{equation*} \begin{split} \left \vert f\left( t\right) -f\left( s\right) \right \vert &\leq \left \vert \left[ f^{\sigma }(t)-f(s)\right] -f^{(\alpha )}(t)\left[ \sigma (t) -s\right]^{\alpha }\right \vert +\left \vert f^{(\alpha )}(t)\left[ t-s\right]^{\alpha}\right\vert \\ &\leq \epsilon \left \vert t-s\right \vert^{\alpha} +\left \vert f^{(\alpha)}(t)\right \vert \left \vert t-s\right \vert^{\alpha}\\ &\leq \epsilon ^{\alpha }\left[ \epsilon +\left\vert f^{(\alpha)}(t) \right\vert \right], \end{split} \end{equation*} where the last step uses $\left\vert t-s\right\vert <\epsilon$. The continuity of $f$ at $t$ follows.
$(ii)$ The proof is similar to the proof of $(i)$, where instead of considering the neighborhood $\mathcal{U}$ of $t$ we consider a left neighborhood $\mathcal{U}^{-}$ of $t$.
$(iii)$ Assume that $f$ is continuous at $t$ and $t$ is right-scattered. By continuity, \begin{equation*} \lim_{s\rightarrow t}\frac{f^{\sigma }(t)-f(s)}{(\sigma (t)-s)^{\alpha }} =\frac{f^{\sigma }(t)-f(t)}{(\sigma (t)-t)^{\alpha}} =\frac{f^{\sigma}(t)-f(t)}{(\mu (t))^{\alpha }}. \end{equation*} Hence, given $\epsilon >0$ and $\alpha \in ]0,1] \cap \left\{ 1/q : q \text{ is an odd number}\right\}$, there is a neighborhood $\mathcal{U}$ of $t$ (or $\mathcal{U}^{-}$ if $\alpha \in ]0,1] \setminus \left\{1/q : q\text{ is an odd number}\right\}$) such that \begin{equation*} \left \vert \frac{f^{\sigma }(t)-f(s)}{(\sigma (t)-s)^{\alpha }} -\frac{f^{\sigma }(t)-f(t)}{(\mu (t))^{\alpha }}\right \vert \leq \epsilon \end{equation*} for all $s\in \mathcal{U}$ (resp. $\mathcal{U}^{-}$). It follows that \begin{equation*} \left \vert \left[ f^{\sigma }(t)-f(s)\right] -\frac{f^{\sigma }(t)-f(t)}{ (\mu (t))^{\alpha }}(\sigma (t)-s)^{\alpha }\right \vert \leq \epsilon
|\sigma (t)-s|^{\alpha} \end{equation*} for all $s\in \mathcal{U}$ (resp. $\mathcal{U}^{-}$). Hence, we get the desired result: \begin{equation*} f^{(\alpha )}(t)=\frac{f^{\sigma }(t)-f(t)}{(\mu (t))^{\alpha}}. \end{equation*}
$(iv)$ Assume that $f$ is fractional differentiable of order $\alpha $ at $t$ and $t$ is right-dense. Let $\epsilon > 0$ be given. Since $f$ is fractional differentiable of order $\alpha $ at $t$, there is a neighborhood $\mathcal{U}$ of $t$ such that \begin{equation*} \left \vert \lbrack f^{\sigma }(t)-f(s)]-f^{(\alpha )}(t)(\sigma
(t)-s)^{\alpha }\right \vert \leq \epsilon |\sigma (t)-s|^{\alpha} \end{equation*} for all $s\in \mathcal{U}$. Since $\sigma (t)=t$, \begin{equation*} \left \vert \lbrack f(t)-f(s)]-f^{(\alpha )}(t)(t-s)^{\alpha }\right \vert
\leq \epsilon |t-s|^{\alpha} \end{equation*} for all $s\in \mathcal{U}$. It follows that \begin{equation*} \left \vert \frac{f(t)-f(s)}{(t-s)^{\alpha }}-f^{(\alpha )}(t)\right \vert \leq \epsilon \end{equation*} for all $s\in \mathcal{U}$, $s\neq t$. Therefore, we get the desired result: \begin{equation*} f^{(\alpha )}(t)=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha }}. \end{equation*} Now assume that \begin{equation*} \lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha}} \end{equation*} exists and is equal to $L$ and $t$ is right-dense. Then, there exists $\mathcal{U}$ such that \begin{equation*} \left \vert \frac{f(t)-f(s)}{(t-s)^{\alpha }}-L\right \vert \leq \epsilon \end{equation*} for all $s\in \mathcal{U}$. Because $t$ is right-dense, \begin{equation*} \left \vert \frac{f^{\sigma }(t)-f(s)}{(\sigma \left( t\right) -s)^{\alpha}} -L\right \vert \leq \epsilon. \end{equation*} Therefore, \begin{equation*} \left \vert \left[ f^{\sigma }(t)-f(s)\right] -L\left( \sigma (t)-s\right)^{\alpha }\right\vert
\leq \epsilon |\sigma \left( t\right) -s|^{\alpha}, \end{equation*} which leads us to the conclusion that $f$ is fractional differentiable of order $\alpha $ at $t$ and $f^{(\alpha )}(t)=L$.
$(v)$ The proof is similar to the proof of $(iv)$, where instead of considering the neighborhood $\mathcal{U}$ of $t$ we consider a left-neighborhood $\mathcal{U}^{-}$ of $t$.
$(vi)$ If $\sigma (t)=t$, then $\mu (t)=0$ and \begin{equation*} f^{\sigma }(t)=f(t)=f(t)+(\mu (t))^{\alpha }f^{(\alpha )}(t). \end{equation*} On the other hand, if $\sigma (t)>t$, then by $(iii)$ \begin{equation*} f^{\sigma }(t)=f(t)+(\mu (t))^{\alpha }\cdot \frac{f^{\sigma }(t)-f(t)}{(\mu (t))^{\alpha }}=f(t)+(\mu (t))^{\alpha }f^{(\alpha )}(t). \end{equation*} The proof is complete. \end{proof}
\begin{remark} In a time scale $\mathbb{T}$, due to the inherited topology of the real numbers, a function $f$ is always continuous at any isolated point $t$. \end{remark}
\begin{proposition} \label{E1:i} If $f:\mathbb{T}\rightarrow \mathbb{R}$ is defined by $f(t)= c$ for all $t\in\mathbb{T}$, $c\in \mathbb{R}$, then $f^{(\alpha)}(t)\equiv 0$. \end{proposition}
\begin{proof} If $t$ is right-scattered, then, by Theorem~\ref{T1} (iii), one has $$ f^{(\alpha)}(t)=\frac{f(\sigma(t))-f(t)}{(\mu(t))^{\alpha}} =\frac{c-c}{(\mu(t))^{\alpha}}=0. $$ Assume $t$ is right-dense. Then, by Theorem~\ref{T1} (iv) and (v), it follows that $$ f^{(\alpha)}(t) = \lim_{s \rightarrow t}\frac{c-c}{(t-s)^{\alpha}} = 0. $$ This concludes the proof. \end{proof}
\begin{proposition} \label{E1:ii} If $f:\mathbb{T}\rightarrow \mathbb{R}$ is defined by $f(t)=t$ for all $t\in \mathbb{T}$, then \begin{equation*} f^{(\alpha )}(t) = \begin{cases} (\mu (t))^{1-\alpha } & \textrm{ if } \alpha \neq 1, \\ 1 & \textrm{ if } \alpha =1. \end{cases} \end{equation*} \end{proposition}
\begin{proof} From Theorem~\ref{T1} (vi) it follows that $\sigma(t) = t + (\mu(t))^{\alpha} f^{(\alpha)}(t)$, that is, $\mu(t) = (\mu(t))^{\alpha} f^{(\alpha)}(t)$. If $\mu(t) \ne 0$, then $f^{(\alpha)}(t)=(\mu(t))^{1-\alpha}$ and the desired relation is proved. Assume now that $\mu(t) = 0$, that is, $\sigma(t) = t$. In this case $t$ is right-dense and by Theorem~\ref{T1} (iv) and (v) it follows that $$ f^{(\alpha)}(t) = \lim_{s \rightarrow t}\frac{t-s}{(t-s)^{\alpha}}. $$ Therefore, if $\alpha =1$, then $f^{(\alpha )}(t)=1$; if $0<\alpha <1$, then $f^{(\alpha )}(t)=0$. The proof is complete. \end{proof}
Let us consider now the two classical cases $\mathbb{T}=\mathbb{R}$ and $\mathbb{T}= h \mathbb{Z}$, $h > 0$.
\begin{corollary} Function $f :\mathbb{R} \rightarrow \mathbb{R}$ is fractional differentiable of order $\alpha$ at point $t \in \mathbb{R}$ if, and only if, the limit $$ \lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha}} $$ exists as a finite number. In this case, \begin{equation} \label{KG:der} f^{(\alpha)}(t)=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha}}. \end{equation} \end{corollary}
\begin{proof} Here $\mathbb{T}=\mathbb{R}$ and all points are right-dense. The result follows from Theorem~\ref{T1} (iv) and (v). Note that if $\alpha \in ]0,1] \setminus \left\{ \frac{1}{q}:q\text{ is an odd number}\right\}$, then the limit only makes sense as a left-sided limit. \end{proof}
\begin{remark} The definition \eqref{KG:der} corresponds to the well-known Kolwankar--Gangal approach to fractional calculus \cite{K:G:96,Wang:FDA12}. \end{remark}
\begin{corollary} Let $h > 0$. If $f :h\mathbb{Z} \rightarrow \mathbb{R}$, then $f$ is fractional differentiable of order $\alpha$ at $t\in h\mathbb{Z}$ with $$ f^{(\alpha)}(t) =\frac{f(t+h)-f(t)}{h^\alpha}. $$ \end{corollary}
\begin{proof} Here $\mathbb{T}=h\mathbb{Z}$ and all points are right-scattered. The result follows from Theorem~\ref{T1} (iii). \end{proof}
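The corollary is easily checked numerically. The following minimal Python sketch (an illustration of ours, with arbitrarily chosen values of $h$, $\alpha$ and $t$) evaluates the fractional derivative on $\mathbb{T}=h\mathbb{Z}$ via Theorem~\ref{T1}~(iii) and compares it with the closed form of Proposition~\ref{E1:ii}:
\begin{verbatim}
# Minimal numerical sketch: T = h*Z, where every point is
# right-scattered with sigma(t) = t + h and mu(t) = h,
# so Theorem T1 (iii) applies.
def frac_derivative_hZ(f, t, h, alpha):
    return (f(t + h) - f(t)) / h**alpha

h, alpha, t = 0.5, 0.3, 2.0
# For f(t) = t, Proposition E1:ii predicts mu(t)**(1 - alpha).
print(frac_derivative_hZ(lambda x: x, t, h, alpha))  # ~ 0.61557
print(h**(1 - alpha))                                # same value
\end{verbatim}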
We now give an example using a more sophisticated time scale: the Cantor set.
\begin{example} Let $\mathbb{T}$ be the Cantor set. It is known (see Example~1.47 of \cite{BP}) that $\mathbb{T}$ does not contain any isolated point, and that $$ \sigma(t) = \begin{cases} t + \frac{1}{3^{m+1}} & \text{ if } t \in L,\\ t & \text{ if } t \in \mathbb{T} \setminus L, \end{cases} $$ where $$ L = \left\{\sum_{k=1}^{m} \frac{a_k}{3^k} + \frac{1}{3^{m+1}} : m \in \mathbb{N} \text{ and } a_k \in \{0, 2\} \text{ for all } 1 \le k \le m\right\}. $$ Thus, $$ \mu(t) = \begin{cases} \frac{1}{3^{m+1}} & \text{ if } t \in L,\\ 0 & \text{ if } t \in \mathbb{T} \setminus L. \end{cases} $$ Let $f : \mathbb{T} \rightarrow \mathbb{R}$ be continuous and $\alpha \in ]0,1]$. It follows from Theorem~\ref{T1} that the fractional derivative of order $\alpha$ of a function $f$ defined on the Cantor set is given by $$ f^{(\alpha)}(t) = \begin{cases} \left[f\left(t + \frac{1}{3^{m+1}}\right)-f(t)\right]3^{(m+1)\alpha} & \text{ if } t \in L,\\[0.3cm] \displaystyle \lim_{s \rightsquigarrow t} \frac{f(t)-f(s)}{(t-s)^\alpha} & \text{ if } t \in \mathbb{T} \setminus L, \end{cases} $$ where $\lim_{s\rightsquigarrow t} = \lim_{s \rightarrow t}$ if $\alpha = \frac{1}{q}$ with $q$ an odd number, and $\lim_{s\rightsquigarrow t} = \lim_{s \rightarrow t^{-}}$ otherwise. \end{example}
For the fractional derivative on time scales to be useful, we would like to know formulas for the derivatives of sums, products and quotients of fractional differentiable functions. This is done according to the following theorem.
\begin{theorem} \label{T2} Assume $f, g : \mathbb{T} \rightarrow \mathbb{R}$ are fractional differentiable of order $\alpha$ at $t \in \mathbb{T}^{\kappa}$. Then, \begin{description} \item[(i)] the sum $f+g:\mathbb{T}\rightarrow \mathbb{R}$ is fractional differentiable at $t$ with $(f+g)^{(\alpha)}(t)=f^{(\alpha)}(t)+g^{(\alpha)}(t)$;
\item[(ii)] for any constant $\lambda$, $\lambda f :\mathbb{T}\rightarrow \mathbb{R}$ is fractional differentiable at $t$ with $(\lambda f)^{(\alpha)}(t)=\lambda f^{(\alpha)}(t)$;
\item[(iii)] if $f$ and $g$ are continuous, then the product $f g :\mathbb{T}\rightarrow \mathbb{R}$ is fractional differentiable at $t$ with \begin{equation*} \begin{split} (fg)^{(\alpha)}(t) &=f^{(\alpha)}(t)g(t)+f(\sigma(t))g^{(\alpha)}(t)\\ &= f^{(\alpha)}(t)g(\sigma(t)) + f(t)g^{(\alpha)}(t); \end{split} \end{equation*}
\item[(iv)] if $f$ is continuous and $f(t)f(\sigma(t))\neq 0$, then $\frac{1}{f}$ is fractional differentiable at $t$ with $$ \left(\frac{1}{f}\right)^{(\alpha)}(t) = -\frac{f^{(\alpha)}(t)}{f(t)f(\sigma(t))}; $$
\item[(v)] if $f$ and $g$ are continuous and $g(t)g(\sigma(t))\neq 0$, then $\frac{f}{g}$ is fractional differentiable at $t$ with $$ \left(\frac{f}{g}\right)^{(\alpha)}(t) =\frac{f^{(\alpha)}(t)g(t)-f(t)g^{(\alpha)}(t)}{g(t)g(\sigma(t))}. $$ \end{description} \end{theorem}
\begin{proof} Let us consider that $\alpha \in ]0,1]\cap \left\{ \frac{1}{q} : q \text{ is an odd number}\right\}$. The proofs for the case $\alpha \in ]0,1] \setminus \left\{\frac{1}{q} : q \text{ is an odd number}\right \}$ are similar: one just needs to choose the proper left-sided neighborhoods. Assume that $f$ and $g$ are fractional differentiable at $t \in\mathbb{T}^{\kappa}$. $(i)$ Let $\epsilon > 0$. Then there exist neighborhoods $\mathcal{U}_{1}$ and $\mathcal{U}_{2}$ of $t$ for which \begin{equation*}
\left|f(\sigma(t))-f(s)-f^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|
\leq \frac{\epsilon}{2}|\sigma(t)-s|^{\alpha} \text{ for all } s\in \mathcal{U}_{1} \end{equation*} and \begin{equation*}
\left|g(\sigma(t))-g(s)-g^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|
\leq \frac{\epsilon}{2}|\sigma(t)-s|^{\alpha} \text{ for all } s\in \mathcal{U}_{2}. \end{equation*} Let $\mathcal{U}=\mathcal{U}_{1}\cap \mathcal{U}_{2}$. Then \begin{equation*} \begin{split}
\biggl|(f&+g)(\sigma(t))-(f+g)(s)-\left[f^{(\alpha)}(t)
+g^{(\alpha)}(t)\right](\sigma(t)-s)^{\alpha}\biggr|\\
&=\left|f(\sigma(t))-f(s)-f^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}
+g(\sigma(t))-g(s)-g^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|\\
&\leq \left|f(\sigma(t))-f(s)-f^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|
+\left|g(\sigma(t))-g(s)-g^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|\\
&\leq \frac{\epsilon}{2}|\sigma(t)-s|^{\alpha}+\frac{\epsilon}{2}|\sigma(t)-s|^{\alpha}
=\epsilon |\sigma(t)-s|^{\alpha} \end{split} \end{equation*} for all $s\in \mathcal{U}$. Therefore, $f+g$ is fractional differentiable at $t$ and $(f+g)^{(\alpha)}(t)=f^{(\alpha)}(t)+g^{(\alpha)}(t)$. $(ii)$ Let $\epsilon > 0$. Then there exists a neighborhood $\mathcal{U}$ of $t$ with \begin{equation*}
\left|f(\sigma(t))-f(s)-f^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|
\leq \epsilon|\sigma(t)-s|^{\alpha} \text{ for all } s\in \mathcal{U}. \end{equation*} It follows that \begin{equation*}
\left|(\lambda f)(\sigma(t))-(\lambda f)(s)
-\lambda f^{(\alpha)}(t)[\sigma(t)-s]^{\alpha}\right|
\leq \epsilon |\lambda| \, |\sigma(t)-s|^{\alpha} \text{ for all } s \in \mathcal{U}. \end{equation*} Therefore, $\lambda f$ is fractional differentiable at $t$ and $(\lambda f)^{(\alpha)}(t)=\lambda f^{(\alpha)}(t)$ holds. $(iii)$ If $t$ is right-dense, then \begin{equation*} \begin{split} (fg)^{(\alpha )}(t) &=\lim_{s\rightarrow t}\frac{\left( fg\right) (t)-\left( fg\right) (s)}{(t-s)^{\alpha }} \\ &=\lim_{s\rightarrow t}\frac{f(t)-f(s)}{(t-s)^{\alpha }}g\left( t\right) +\lim_{s\rightarrow t}\frac{g(t)-g(s)}{(t-s)^{\alpha }}f\left( s\right)\\ &= f^{(\alpha )}(t)g(t)+g^{(\alpha )}(t)f(t) \\ &= f^{(\alpha )}(t)g(t)+f(\sigma (t))g^{(\alpha )}(t). \end{split} \end{equation*} If $t$ is right-scattered, then \begin{equation*} \begin{split} \left( fg\right)^{(\alpha )}(t) &= \frac{\left( fg\right)^{\sigma}(t) -\left( fg\right) (t)}{(\mu (t))^{\alpha }} \\ &=\frac{f^{\sigma }(t)-f(t)}{(\mu (t))^{\alpha }}g\left( t\right) +\frac{ g^{\sigma }(t)-g(t)}{(\mu (t))^{\alpha }}f^{\sigma }(t)\\ &=f^{(\alpha )}(t)g(t)+f(\sigma (t))g^{(\alpha )}(t). \end{split} \end{equation*} The other product rule formula follows by interchanging in $\left( fg\right)^{(\alpha )}(t)=f^{(\alpha )}(t)g(t)+f(\sigma (t))g^{(\alpha )}(t)$ the functions $f$ and $g$. $(iv)$ We use the fractional derivative of a constant (Proposition~\ref{E1:i}) and Theorem~\ref{T2} $(iii)$ just proved: from Proposition~\ref{E1:i} we know that $$ \left(f \cdot \frac{1}{f}\right)^{(\alpha)}(t)=(1)^{(\alpha)}(t)=0 $$ and, therefore, by (iii) $$ \left(\frac{1}{f}\right)^{(\alpha)}(t)f(\sigma(t)) +f^{(\alpha)}(t)\frac{1}{f(t)}=0. $$ Since we are assuming $f(\sigma(t))\neq 0$, \begin{equation*} \left(\frac{1}{f}\right)^{(\alpha)}(t) =-\frac{f^{(\alpha)}(t)}{f(t)f(\sigma(t))}. \end{equation*} For the quotient formula $(v)$, we use $(ii)$ and $(iv)$ to calculate \begin{equation*} \begin{split} \left(\frac{f}{g}\right)^{(\alpha)}(t)&=\left(f \cdot \frac{1}{g}\right)^{(\alpha)}(t)\\ &=f(t)\left(\frac{1}{g}\right)^{(\alpha)}(t)+f^{(\alpha)}(t)\frac{1}{g(\sigma(t))}\\ &=-f(t)\frac{g^{(\alpha)}(t)}{g(t)g(\sigma(t))}+f^{(\alpha)}(t)\frac{1}{g(\sigma(t))}\\ &=\frac{f^{(\alpha)}(t)g(t)-f(t)g^{(\alpha)}(t)}{g(t)g(\sigma(t))}. \end{split} \end{equation*} This concludes the proof. \end{proof}
The following theorem is proved in \cite{BP} for $\alpha = 1$. Here we show its validity for $\alpha \in \left] 0,1\right[$.
\begin{theorem} \label{thm:der:pf} Let $c$ be a constant, $m \in \mathbb{N}$, and $\alpha \in \left] 0,1\right[$. \begin{description} \item[(i)] If $f(t) =(t-c)^{m}$, then $$ f^{(\alpha)}(t)=(\mu(t))^{1-\alpha} \sum_{\nu = 0}^{m-1}\left(\sigma(t)-c\right)^{\nu}(t-c)^{m-1-\nu}. $$ \item[(ii)] If $g(t)=\frac{1}{(t-c)^{m}}$, then $$ g^{(\alpha)}(t)=-(\mu(t))^{1-\alpha} \sum_{\nu = 0}^{m-1}\frac{1}{(\sigma(t)-c)^{m-\nu}(t-c)^{\nu+1}}, $$ provided $(t-c)\left(\sigma(t)-c\right) \neq 0$. \end{description} \end{theorem}
\begin{proof} We prove the first formula by induction. If $m=1$, then $f(t)=t-c$ and $f^{(\alpha)}(t)=(\mu(t))^{1-\alpha}$ holds from Propositions~\ref{E1:i} and \ref{E1:ii} and Theorem~\ref{T2} $(i)$. Now assume that $$ f^{(\alpha)}(t)=(\mu(t))^{1-\alpha} \sum_{\nu = 0}^{m-1}(\sigma(t)-c)^{\nu}(t-c)^{m-1-\nu} $$ holds for $f(t) =(t-c)^{m}$ and let $F(t)=(t-c)^{m+1}=(t-c)f(t)$. We use the product rule (Theorem~\ref{T2} $(iii)$) to obtain \begin{equation*} \begin{split} F^{(\alpha)}(t)&=(t-c)^{(\alpha)}f(\sigma(t))+f^{(\alpha)}(t)(t-c) =(\mu(t))^{1-\alpha}f(\sigma(t))+f^{(\alpha)}(t)(t-c)\\ &=(\mu(t))^{1-\alpha}(\sigma(t)-c)^{m}+(\mu(t))^{1-\alpha}(t-c) \sum_{\nu = 0}^{m-1}(\sigma(t)-c)^{\nu}(t-c)^{m-1-\nu}\\ &=(\mu(t))^{1-\alpha}\left[( \sigma(t)-c)^{m} + \sum_{\nu = 0}^{m-1}(\sigma(t)-c)^{\nu}(t-c)^{m-\nu}\right]\\ &=(\mu(t))^{1-\alpha} \sum_{\nu = 0}^{m}(\sigma(t)-c)^{\nu}(t-c)^{m-\nu}. \end{split} \end{equation*} Hence, by mathematical induction, part $(i)$ holds. For $g(t)=\frac{1}{(t-c)^{m}}=\frac{1}{f(t)}$, we apply Theorem~\ref{T2} $(iv)$ to obtain \begin{equation*} \begin{split} g^{(\alpha)}(t)&=-\frac{f^{(\alpha)}(t)}{f(t)f(\sigma(t))} =-(\mu(t))^{1-\alpha}\frac{\sum_{\nu = 0}^{m-1}(\sigma(t) -c)^{\nu}(t-c)^{m-1-\nu}}{(t-c)^{m}(\sigma(t)-c)^{m}}\\ &=-(\mu(t))^{1-\alpha}\sum_{\nu = 0}^{m-1} \frac{1}{(t-c)^{\nu+1}(\sigma(t)-c)^{m-\nu}}, \end{split} \end{equation*} provided $(t-c)\left(\sigma(t)-c\right) \neq 0$. \end{proof}
Let us illustrate Theorem~\ref{thm:der:pf} in special cases.
\begin{example} \label{ex:17} Let $\alpha \in \left]0,1\right[$. \begin{description} \item[(i)] If $f(t)=t^{2}$, then $f^{(\alpha)}(t)=(\mu(t))^{1-\alpha} [\sigma(t)+t]$.
\item[(ii)] If $f(t)=t^{3}$, then $f^{(\alpha)}(t)=(\mu(t))^{1-\alpha} [t^{2}+t\sigma(t)+(\sigma(t))^{2}]$.
\item[(iii)] If $f(t)=\frac{1}{t}$, then $f^{(\alpha)}(t)= -\frac{(\mu(t))^{1-\alpha}}{t\sigma(t)}$. \end{description} \end{example}
From the results already obtained, it is not difficult to see that the fractional derivative does not satisfy a chain rule like $(f\circ g)^{(\alpha)}(t)=f^{(\alpha)}(g(t)) g^{(\alpha)}(t)$:
\begin{example} \label{ex:conterex:cr} Let $\alpha \in \left]0,1\right[$. Consider $f(t)=t^{2}$ and $g(t)=2 t$. Then, \begin{equation} \label{eq:ex:cr:1} (f\circ g)^{(\alpha)}(t) = \left(4 t^2\right)^{(\alpha)} = 4 (\mu(t))^{1-\alpha} \left(\sigma(t) + t \right) \end{equation} while \begin{equation} \label{eq:ex:cr:2} f^{(\alpha)}(g(t)) g^{(\alpha)}(t) = (\mu(2t))^{1-\alpha} \left(\sigma(2t) + 2t\right) 2 (\mu(t))^{1-\alpha} \end{equation} and, for example for $\mathbb{T}=\mathbb{Z}$, it is easy to see that $(f\circ g)^{(\alpha)}(t) \ne f^{(\alpha)}(g(t)) g^{(\alpha)}(t)$. \end{example}
Note that when $\alpha = 1$ and $\mathbb{T} = \mathbb{R}$ our derivative $f^{(\alpha)}$ reduces to the standard derivative $f'$ and, in this case, both expressions \eqref{eq:ex:cr:1} and \eqref{eq:ex:cr:2} give $8 t$, as expected. In the fractional case $\alpha \in ]0,1[$ we are able to prove the following result, valid for an arbitrary time scale $\mathbb{T}$.
\begin{theorem}[Chain rule] \label{T3} Let $\alpha \in \left]0,1\right[$. Assume $g:\mathbb{R}\rightarrow \mathbb{R}$ is continuous, $g:\mathbb{T}\rightarrow \mathbb{R}$ is fractional differentiable of order $\alpha$ at $t \in \mathbb{T}^{\kappa}$, and $f:\mathbb{R}\rightarrow \mathbb{R}$ is continuously differentiable. Then there exists $c$ in the real interval $[t,\sigma(t)]$ with \begin{equation} \label{q1} (f\circ g)^{(\alpha)}(t)=f'(g(c))g^{(\alpha)}(t). \end{equation} \end{theorem}
\begin{proof} Let $t \in \mathbb{T}^{\kappa}$. First we consider $t$ to be right-scattered. In this case $$ (f\circ g)^{(\alpha)}(t) =\frac{f(g(\sigma(t)))-f(g(t))}{(\mu(t))^{\alpha}}. $$ If $g(\sigma(t))= g(t)$, then we get $(f\circ g)^{(\alpha)}(t)=0$ and $g^{(\alpha)}(t)=0$. Therefore, \eqref{q1} holds for any $c$ in the real interval $[t,\sigma(t)]$ and we can assume $g(\sigma(t)) \neq g(t)$. By the mean value theorem, \begin{equation*} \begin{split} (f\circ g)^{(\alpha)}(t)&=\frac{f(g(\sigma(t)))-f(g(t))}{g(\sigma(t))-g(t)} \cdot \frac{g(\sigma(t))-g(t)}{(\mu(t))^{\alpha}}\\ &=f'(\xi)g^{(\alpha)}(t), \end{split} \end{equation*} where $\xi$ is between $g(t)$ and $g(\sigma(t))$. Since $g:\mathbb{R}\rightarrow \mathbb{R}$ is continuous, there is a $c\in[t,\sigma(t)]$ such that $g(c)=\xi$, which gives us the desired result. Now consider the case when $t$ is right-dense. In this case \begin{equation*} \begin{split} (f\circ g)^{(\alpha)}(t)&=\lim_{s\rightarrow t}\frac{f(g(t))-f(g(s))}{g(t)-g(s)} \cdot \frac{g(t)-g(s)}{(t-s)^{\alpha}}\\ &=\lim_{s\rightarrow t}\left\{f'(\xi_{s})\cdot\frac{g(t)-g(s)}{(t-s)^{\alpha}}\right\} \end{split} \end{equation*} by the mean value theorem, where $\xi_{s}$ is between $g(s)$ and $g(t)$. By the continuity of $g$ we get that $\lim_{s\rightarrow t}\xi_{s}=g(t)$, which gives us the desired result. \end{proof}
\begin{example} Let $\mathbb{T}=\mathbb{Z}$, for which $\sigma(t) = t+1$ and $\mu(t) \equiv 1$, and consider the same functions of Example~\ref{ex:conterex:cr}: $f(t)=t^{2}$ and $g(t)=2t$. We can find directly the value $c$, guaranteed by Theorem~\ref{T3} in the interval $[4,\sigma(4)]=[4,5]$, so that \begin{equation} \label{eq:ex:cr:fc} (f\circ g)^{(\alpha)}(4)=f'(g(c))g^{(\alpha)}(4). \end{equation} From \eqref{eq:ex:cr:1} it follows that $(f\circ g)^{(\alpha)}(4)=36$. Because $g^{(\alpha)}(4)=2$ and $f'(g(c))=4c$, equality \eqref{eq:ex:cr:fc} simplifies to $36 = 8 c$, and so $c=\frac{9}{2}$. \end{example}
We end Section~\ref{sub:sec:FD} explaining how to compute fractional derivatives of higher-order. As usual, we define the derivative of order zero as the identity operator: $f^{(0)} = f$. \begin{definition} \label{def:hofd} Let $\beta$ be a nonnegative real number. We define the fractional derivative of $f$ of order $\beta$ by \begin{equation*} f^{(\beta)}:=\left(f^{\Delta^{N}}\right)^{(\alpha)}, \end{equation*} where $N := \lfloor \beta \rfloor$ (that is, $N$ is the integer part of $\beta$) and $\alpha:=\beta - N$. \end{definition}
Note that the $\alpha$ of Definition~\ref{def:hofd} is in the interval $[0,1[$. We illustrate Definition~\ref{def:hofd} with some examples.
\begin{example} If $f(t)=c$ for all $t\in \mathbb{T}$, $c$ a constant, then $f^{(\beta)}\equiv 0$ for any $\beta \in\mathbb{R}_0^{+}$. \end{example}
\begin{example} Let $f(t) = t^2$, $\mathbb{T} = h \mathbb{Z}$, $h > 0$, and $\beta=1.3$. Then, by Definition~\ref{def:hofd}, we have $f^{(1.3)}=\left(f^{\Delta}\right)^{(0.3)}$. It follows from $\sigma(t) = t+h$ that $f^{(1.3)}(t)=(2t + h)^{(0.3)}$. Proposition~\ref{E1:i} and Theorem~\ref{T2} (i) and (ii) allow us to write that $f^{(1.3)}(t)= 2 (t)^{(0.3)}$. We conclude from Proposition~\ref{E1:ii} with $\mu(t) \equiv h$ that $f^{(1.3)}(t) = 2 h^{0.7}$. \end{example}
\subsection{Fractional Integration} \label{sub:sec:FI}
The two major ingredients of any calculus are differentiation and integration. Now we introduce the fractional integral on time scales.
\begin{definition} \label{def:int} Assume that $f:\mathbb{T}\rightarrow \mathbb{R}$ is a regulated function. We define the indefinite fractional integral of $f$ of order $\beta$, $0 \leq \beta \leq 1$, by \begin{equation*} \int f(t)\Delta^{\beta}t := \left(\int f(t)\Delta t\right)^{(1-\beta)}, \end{equation*} where $\int f(t)\Delta t$ is the usual indefinite integral of time scales \cite{BP}. \end{definition}
\begin{remark} It follows from Definition~\ref{def:int} that $\int f(t)\Delta^{1}t = \int f(t)\Delta t$ and $\int f(t)\Delta^{0}t = f(t)$. \end{remark}
\begin{definition} \label{def:intFracCauchy} Assume $f:\mathbb{T}\rightarrow \mathbb{R}$ is a regulated function. Let $$ F^{\beta}(t)=\int f(t)\Delta^{\beta} t $$ denote the indefinite fractional integral of $f$ of order $\beta$ with $0 \leq \beta \leq 1$. We define the Cauchy fractional integral by \begin{equation*}
\int_{a}^{b}f(t)\Delta^{\beta} t := \left. F^{\beta}(t)\right|^b_a =F^{\beta}(b)-F^{\beta}(a), \quad a,b\in \mathbb{T}. \end{equation*} \end{definition}
The next theorem gives some properties of the fractional integral of order $\beta$.
\begin{theorem} \label{T4} If $a, b, c \in \mathbb{T}$, $\xi\in\mathbb{R}$, and $f,g\in \mathcal{C}_{rd}$ with $0\leq \beta\leq 1$, then \begin{description} \item[(i)] $\int_{a}^{b}[f(t)+g(t)]\Delta^{\beta} t = \int_{a}^{b}f(t)\Delta^{\beta} t + \int_{a}^{b}g(t)\Delta^{\beta} t$;
\item[(ii)] $\int_{a}^{b}(\xi f)(t)\Delta^{\beta} t = \xi \int_{a}^{b}f(t)\Delta^{\beta} t$;
\item[(iii)] $\int_{a}^{b}f(t)\Delta^{\beta} t = - \int_{b}^{a}f(t)\Delta^{\beta} t$;
\item[(iv)] $\int_{a}^{b}f(t)\Delta^{\beta} t = \int_{a}^{c}f(t)\Delta^{\beta} t + \int_{c}^{b}f(t)\Delta^{\beta} t$;
\item[(v)] $\int_{a}^{a}f(t)\Delta^{\beta} t = 0$. \end{description} \end{theorem}
\begin{proof} The equalities follow from Definition~\ref{def:int} and Definition~\ref{def:intFracCauchy}, analogous properties of the delta integral of time scales, and the properties of Section~\ref{sub:sec:FD} for the fractional derivative on time scales. $(i)$ From Definition~\ref{def:intFracCauchy} \begin{equation*} \int_{a}^{b}(f+g)(t)\Delta^{\beta} t
= \left. \int \left(f(t) + g(t)\right) \Delta^{\beta} t \right|_a^b \end{equation*} and, from Definition~\ref{def:int}, \begin{equation*} \int_{a}^{b}(f+g)(t)\Delta^{\beta} t
= \left. \left(\int \left(f(t) + g(t)\right) \Delta t\right)^{(1-\beta)} \right|_a^b. \end{equation*} It follows from the properties of the delta integral and Theorem~\ref{T2} (i) that \begin{equation*} \int_{a}^{b}(f+g)(t)\Delta^{\beta} t = \left. \left(\int f(t) \Delta t\right)^{(1-\beta)}
+ \left(\int g(t) \Delta t\right)^{(1-\beta)}\right|_a^b. \end{equation*} Using again Definition~\ref{def:int} and Definition~\ref{def:intFracCauchy}, we arrive at the intended relation: \begin{equation*} \begin{split} \int_{a}^{b}(f+g)(t)\Delta^{\beta} t
&= \left. \int f(t) \Delta^\beta t + \int g(t) \Delta^\beta t\right|_a^b\\
&= \left. F^\beta(t) + G^\beta(t)\right|_a^b = F^\beta(b) + G^\beta(b) - F^\beta(a) - G^\beta(a)\\ &= \int_{a}^{b}f(t)\Delta^{\beta} t + \int_{a}^{b}g(t)\Delta^{\beta} t. \end{split} \end{equation*} $(ii)$ From Definition~\ref{def:intFracCauchy} and Definition~\ref{def:int} one has \begin{equation*} \int_{a}^{b}(\xi f)(t)\Delta^{\beta} t
=\left. \int (\xi f)(t)\Delta^\beta t\right|_a^b
=\left. \left(\int (\xi f)(t)\Delta t\right)^{(1-\beta)}\right|_a^b. \end{equation*} It follows from the properties of the delta integral and Theorem~\ref{T2} (ii) that \begin{equation*} \int_{a}^{b}(\xi f)(t)\Delta^{\beta} t
= \left. \xi \left(\int f(t)\Delta t\right)^{(1-\beta)}\right|_a^b. \end{equation*} We conclude the proof of (ii) by using again Definition~\ref{def:int} and Definition~\ref{def:intFracCauchy}: \begin{equation*} \begin{split} \int_{a}^{b}(\xi f)(t)\Delta^{\beta} t
&= \left. \xi \int f(t)\Delta^\beta t\right|_a^b
= \left. \xi F^\beta(t)\right|_a^b = \xi\left(F^\beta(b)-F^\beta(a)\right)\\ &= \xi \int_a^b f(t) \Delta^\beta t. \end{split} \end{equation*} The last three properties are direct consequences of Definition~\ref{def:intFracCauchy}:\\ $(iii)$ \begin{equation*} \begin{split} \int_{a}^{b}f(t)\Delta^{\beta} t &= F^\beta(b) - F^\beta(a) = - \left(F^\beta(a)-F^\beta(b)\right)\\ &= -\int_{b}^{a}f(t)\Delta^{\beta} t. \end{split} \end{equation*} $(iv)$ \begin{equation*} \begin{split} \int_{a}^{b}f(t)\Delta^{\beta} t &= F^\beta(b) - F^\beta(a) = F^\beta(c) - F^\beta(a) + F^\beta(b) - F^\beta(c)\\ &=\int_{a}^{c} f(t)\Delta^{\beta} t + \int_{c}^{b} f(t)\Delta^{\beta} t. \end{split} \end{equation*} $(v)$ \begin{equation*} \int_{a}^{a}f(t)\Delta^{\beta} t = F^\beta(a) - F^\beta(a) = 0. \end{equation*} The proof is complete. \end{proof}
We end with a simple example of a discrete fractional integral.
\begin{example} Let $\mathbb{T} = \mathbb{Z}$, $0 \le \beta < 1$, and $f(t) = t$. Using the fact that in this case $$ \int t \, \Delta t = \frac{t(t-1)}{2} + C $$ with $C$ a constant (indeed, $\left(\frac{t(t-1)}{2}\right)^{\Delta} = t$ on $\mathbb{Z}$), we have $$ \int_{1}^{10} t \, \Delta^\beta t = \left. \int t \, \Delta^\beta t \right|_{1}^{10} = \left. \left(\int t \, \Delta t\right)^{(1-\beta)} \right|_{1}^{10} = \left. \left(\frac{t^2-t}{2} + C\right)^{(1-\beta)} \right|_{1}^{10}. $$ It follows from Example~\ref{ex:17} (i) and Proposition~\ref{E1:ii} with $\mu(t) \equiv 1$, Theorem~\ref{T2} (i) and (ii) and Proposition~\ref{E1:i} that $$ \int_{1}^{10} t \, \Delta^\beta t = \left. \frac{1}{2} \left(2 t + 1 - 1\right) \right|_{1}^{10} = \left. t \, \right|_{1}^{10} = 10 - 1 = 9. $$ \end{example}
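The result can be double-checked numerically. The following Python sketch (our illustration) implements Definition~\ref{def:int} and Definition~\ref{def:intFracCauchy} on $\mathbb{T}=\mathbb{Z}$:
\begin{verbatim}
# Check on T = Z (mu = 1): the Cauchy fractional integral of
# f(t) = t from 1 to 10 equals 9 for every beta in [0,1[.
def F(t):                       # delta antiderivative of t on Z
    return t * (t - 1) / 2

def frac_deriv_Z(g, t, alpha):  # Theorem T1 (iii); mu(t)**alpha = 1
    return g(t + 1) - g(t)

def cauchy_frac_integral(a, b, beta):
    return frac_deriv_Z(F, b, 1 - beta) - frac_deriv_Z(F, a, 1 - beta)

print(cauchy_frac_integral(1, 10, 0.4))  # 9.0
\end{verbatim}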
\section{Conclusion} \label{sec:Conc}
Fractional calculus, that is, the study of differentiation and integration of noninteger order, is here extended, via the recent and powerful calculus on time scales, to include, in a single theory, the discrete fractional difference calculus and the local continuous fractional differential calculus. We have only introduced some fundamental concepts and proved some basic properties, and much remains to be done in order to develop the theory initiated here: to prove concatenation properties of derivatives and integrals, to consider partial fractional operators on time scales, to introduce a suitable fractional exponential on time scales, to study boundary value problems for fractional differential equations on time scales, to investigate the usefulness of the new fractional calculus in applications to real world problems where the time scale is partially continuous and partially discrete with a time-varying graininess function, etc. We would also like to mention that it is possible to develop fractional calculi on time scales in directions different from the one considered here. For instance, instead of following the delta approach we have adopted, one can develop a nabla \cite{Alm:Tor:JVC,naty:NA:2009}, a diamond \cite{Mal:Tor:diamond,Moz}, or a symmetric \cite{MyID:246,MyID:247} time scale fractional calculus. These and other questions will be the subject of future research.
\section*{Acknowledgments}
This research was initiated while N. Benkhettou was visiting the Department of Mathematics of University of Aveiro, February and March of 2013. The hospitality of the host institution and the financial support of Sidi Bel Abbes University are here gratefully acknowledged. A. M. C. Brito da Cruz and D. F. M. Torres were supported by Portuguese funds through the \emph{Center for Research and Development in Mathematics and Applications} (CIDMA) and \emph{The Portuguese Foundation for Science and Technology} (FCT), within project PEst-OE/MAT/UI4106/2014. The authors are very grateful to three referees for valuable remarks and comments, which significantly contributed to the quality of the paper.
\small
\end{document}
Electrical performance of PEDOT:PSS-based textile electrodes for wearable ECG monitoring: a comparative study
Reinel Castrillón, Jairo J. Pérez and Henry Andrade-Caicedo

BioMedical Engineering OnLine volume 17, Article number: 38 (2018)
Wearable textile electrodes for the detection of biopotentials are a promising tool for the monitoring and early diagnosis of chronic diseases. We present a comparative study of the electrical characteristics of four textile electrodes manufactured from common fabrics treated with a conductive polymer, a commercial fabric, and disposable Ag/AgCl electrodes. These characteristics allow assessing the performance of the materials when used as ECG electrodes. The electrodes were subjected to different electrical tests, complemented with conductivity calculations and microscopic images, to determine their feasibility for the detection of ECG signals.
We evaluated four electrical characteristics: contact impedance, electrode polarization, noise, and long-term performance. We analyzed PEDOT:PSS-treated fabrics based on cotton, cotton–polyester, lycra and polyester, as well as a commercial fabric made of silver-plated nylon (Shieldex® Med-Tex P130) and commercial Ag/AgCl electrodes. We calculated conductivity from the surface resistance and analyzed their surfaces at a microscopic level. Rwizard was used for the statistical analysis.
The results showed that textile electrodes treated with PEDOT:PSS are suitable for the detection of ECG signals. The error in detecting features of the ECG signal was lower than 2%, and the electrodes kept working properly after 36 h of continuous use. Even though the contact impedance and the polarization level of the textile electrodes were greater than those of the commercial electrodes, these parameters did not affect the acquisition of the ECG signals. The conductivity calculated for the fabrics was consistent with the contact impedance measurements.
The imminent population growth is a major concern for public health systems worldwide. In most countries, hospital capacity is insufficient to treat patients promptly. Traditional medicine is reactive rather than preventive, relying on late responses to, in many cases, predictable conditions. Furthermore, the system falls short of covering the persistent demand, especially from cardiovascular patients, who require continuous and frequent monitoring. According to the World Health Organization (WHO), cardiovascular diseases are the globally leading cause of death, with about 17.5 million deaths in 2012, accounting for 31% of deaths worldwide. Consequently, early diagnosis of these diseases becomes an essential means for their prevention and treatment [1].
Electrocardiography (ECG) is one of the most popular techniques in clinical practice [2]. Technological advances have made it possible to include it in the daily life of patients. Modern biomedical systems allow the incorporation of high-performance ambulatory monitoring devices into commonly used elements such as clothing. These elements are known as wearable systems and belong to a strategic trend of technological devices that seek to improve health care promotion. They have enabled continuous wearable monitoring of several physiological signals at low cost, with easy manufacturability and comfort.
A growing interest in alternative electrodes, referred to as textile electrodes, has been reported by different research groups [3]. The performance of textile electrodes has been evaluated on biological signals such as respiration and ECG and compared with commercial ECG electrodes [4]. However, these textile electrodes have limitations in noise reduction, polarization, durability and long-term performance that need to be overcome.
Although ECG monitoring systems traditionally depend on the use of Ag/AgCl disposable electrodes, textile electrodes offer an alternative means to register electrical cardiac readings over time, yielding equivalent diagnostic information. Ag/AgCl electrodes are suitable for short periods of time. Afterward, they become uncomfortable due to the adhesives used to ensure a firm attachment to the skin. They require electrolytic gels that evaporate after a few hours [5]. Additionally, they can eventually cause harmful skin reactions [6]. These problems make the conventional Ag/AgCl electrode unsuitable for routine and long-term ECG measurements.
Ag/AgCl electrodes have been extensively studied and tested, to the extent that the Association for the Advancement of Medical Instrumentation (AAMI) and the American National Standards Institute (ANSI) have proposed the standard "Disposable ECG Electrodes—ANSI/AAMI EC12:2000/(R)2010", containing the performance requirements and test methods for disposable electrodes used in electrocardiography. Nevertheless, there is no similar standard for electro-conductive textile electrodes [7]. This fact leads researchers to propose the most relevant and resourceful strategies for the characterization of textile electrodes.
Dry electrodes enable long-term monitoring, which becomes relevant for specific health conditions, such as chronic diseases, fitness, and self-care. Current efforts aim at the development of ambulatory monitoring systems based on electrodes assembled from textile materials such as cotton, polyester, lycra, and silver-plated nylon. Fabrics made of those materials are commonly utilized in wearable systems once treated with compounds, such as electroactive polymers, carbon structures and metal substrates [8], that allow electrical conduction and hence the detection of biological potentials. Poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate), or in simplified form PEDOT:PSS [9, 10], is used to improve ionic conductivity, which reduces the effects of contact impedance in ECG signal acquisition.
Most of the works reported in the literature are comparative studies between new textile electrodes and commercial reference electrodes. In addition, many of them focus on testing contact impedance and noise. Pani et al. [11] treated woven cotton and polyester fabrics with highly conductive PEDOT:PSS solutions and described the effects on conductivity and affinity when using a different second dopant. Conductivities of 424 mS/cm for cotton and 575 mS/cm for polyester were reported and compared with commercial Ag/AgCl 3M electrodes in human ECG recordings. The results showed that the conductivity can be improved both by improving the quality of the treatment of the textiles with conductive polymers and by carefully designing the electrode, i.e. the distribution of the surface, the thickness, the snap fastener and the conductive yarn. They indicated that there is a decrease in the mismatch in the electrochemical impedance of the skin-electrode interface when using PEDOT:PSS; therefore, conductive gels can be avoided. In this work, we studied the electrical performance of five fabrics from the point of view of polarization, long-term performance, contact impedance and noise.
Each electrode was evaluated based on four key features: contact impedance [12,13,14,15,16], polarization [17,18,19], noise level [7, 14, 20, 21] and long-term performance [6, 14, 17, 22].
Samples of the textile materials used in this study were kindly provided by the Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy. The fabrication process is described by Pani et al. [11]. Briefly, textile electrodes were made by treating conventional fabrics with a conductive solution of PEDOT:PSS dispersion Clevios PH 500 (Heraeus Clevios—Germany); the second dopant was glycerol 33%. Woven fabrics were used and immersed for at least 48 h at room temperature in the polymer solution. Fabrics were then taken out from the solution and drained to remove the excess solution. Samples were annealed for both water and dopants to evaporate, in order to avoid deterioration of the fabric mechanical properties. The conductive fabrics used in this study were: cotton, cotton–polyester (65% cotton, 35% polyester), lycra, and polyester; additionally, we included a commercial silver-plated nylon fabric, Shieldex® Med-Tex P130 (Statex—Germany) [23].
Figure 1 shows five electrodes manufactured following the process described by Pani et al. [11]. The fabrics were cut into pieces of 20 mm × 20 mm, which were sewn to a non-conductive synthetic leather with silver-coated yarn to obtain greater rigidity. The size of the electrodes, which is acceptable for ECG monitoring, was chosen to ensure reproducibility of the customized fabrication process. The use of a layer of rigid synthetic leather allowed to improve the contact between the electrode and the skin ensuring a uniform pressure, which is especially beneficial in the case of textile electrodes.
Physical appearance of the electrodes used in the research. Each electrode was constructed from 2 cm × 2 cm pieces of fabric sewn to a non-conductive support using silver–nylon conducting yarns. The commercial Ag/AgCl electrode is used as the standard of comparison in each of the measurements
Finally, we fixed a metallic snap fastener to the synthetic leather and interfaced them with the same conductive yarn. In this way, the snap fastener remained at the rear of the electrode without coming into contact with the skin. Figure 2 shows a closer view of the final aspect of the electrode, its structure, and components. The snap fastener was used to connect the electrodes to the ECG leads. In the experiments, we utilized Ag/AgCl disposable electrodes ref 2228 (3M, Germany) as the reference electrode in the ECG recording arrangement. This work focuses on electrode-skin interactions; other evaluation tests to characterize the physical properties of the electrodes were not conducted.
Closer view of the final aspect of the textile electrode. Front, back and top view of a textile electrode made of lycra. On the right side, it is possible to discern the elements of the electrode: conductive fabric, synthetic leather, conductive yarn, and metallic snap fastener
Pani et al. [11] reported the values of conductivity for cotton and polyester. In this work, we calculated the conductivity for cotton–polyester and lycra as the reciprocal of the product of thickness and surface resistance, considering the fabrics as thin films with uniform surfaces, commonly reported as two-dimensional entities. The conductivity of Med-Tex P130 was not calculated because its silver-plated surface is intended to release silver ions for wound care, skin disorders, skin irritations, and burn treatment, not to provide uniform conduction.
Figure 3 shows optical micrographs of each type of fabric, where it is possible to appreciate the different types of weave. Optical micrographs were acquired using a 10× objective and an upright microscope (Eclipse Ci; Nikon).
Optical micrographs of the fabrics used in the study. An upright microscope with a 10× objective was used. The scale bar corresponds to 100 µm
Our test protocol was previously approved by the Committee of Health Research Ethics of Universidad Pontificia Bolivariana (Colombia), recorded in document R.R. N 80 17-12-2008. Data were obtained from 8 healthy, slim-built individuals between the ages of 18 and 30, four for each test (two men and two women). Our interest lay in the number of repeated measurements of each type of electrode rather than in a large number of individuals. Even though the evaluations were carried out with four participants, the noise was measured in eight individuals: the four individuals chosen originally for the noise test and another four from the first long-term performance measurements, since both tests followed the same protocol. We set up the experiments in a parallel electrode configuration. We replaced the disposable electrodes at every test to avoid adding a new variable to the experiments.
The volunteers were informed about the protocol to which they would be subjected. They were asked to be at rest for a period of 30 min before the test to homogenize their body conditions. Then, the area to be measured was shaved and cleaned with alcohol to improve the adhesion of the electrodes to the skin. An elastic waistband was used to attach the textile electrodes to the skin. Neither adhesives nor electrolytic gel were used.
We used an R language based platform known as Rwizard [24] to perform all the statistical analysis.
The main electronic equipment that we used in the study was:
A virtual instrument, composed of a two-channel USB oscilloscope and a function generator, Handyscope HS5 (Sneek, The Netherlands).
A device to measure low voltages, currents, and power, Cassy Lab (LD DIDACTIC GmbH, Hürth, Germany).
A switching circuit, driven by a microcontroller, to interchange the measurement terminals among different points of the circuit.
An acquisition card based on an EVM ADS1298 device: a low-power, 24-bit, simultaneously sampling, eight-channel front-end for ECG and EEG applications (Texas Instruments, Texas, USA).
A laptop to set up the electronic systems and to record the data.
Measurement strategies are described in the following sections:
Contact impedance measurements
Contact impedance refers to the impedance at the skin-electrode interface. This test intends to quantify the property of the electrode-skin contact to oppose time-varying electric current produced by the material under test. We selected single and double dispersion Cole impedance models of first and second order to represent this parameter [25, 26]. We set a variable AC source at 5 V peak to peak (\( 5~V_{pp} \)) to sweep in a range of 0.1 Hz–10 kHz. Although the spectral components of ECG signals do not exceed 150 Hz, it is strategic to measure high frequencies to tune the models [27].
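For illustration, a single-dispersion Cole model can be evaluated numerically as in the following Python sketch (our own illustration; the parameter values R0, R_inf, tau and alpha are arbitrary placeholders, not fitted data):

    import numpy as np

    def cole_single(f, R_inf, R0, tau, alpha):
        # Single-dispersion Cole model:
        # Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)
        w = 2 * np.pi * f
        return R_inf + (R0 - R_inf) / (1 + (1j * w * tau) ** alpha)

    freqs = np.logspace(-1, 4, 50)                   # 0.1 Hz to 10 kHz, as in the sweep
    Z = cole_single(freqs, 100.0, 100e3, 1e-2, 0.8)  # illustrative parameter values
    print(abs(Z[0]), abs(Z[-1]))                     # |Z| decreases with frequency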
We used a variation of the method reported by Xie et al. [12]. The procedure involves determining the response of the electrodes when a sinusoidal voltage source is swept in frequency. This method is effective in measuring the absolute magnitude of the impedance; however, it does not allow the discrimination of the resistive and reactive components. To find such components individually, we performed the procedure based on the scheme in Fig. 4a. Instead of using a multimeter, we used a digital oscilloscope to estimate the magnitude and phase components of the contact impedance.
Setup for contact impedance measurement \( (Z_{contact}) \). a Circuit to calculate combined impedance \(Z_{sum}\), it is equal to the sum of both, tissue impedance (skin impedance and subcutaneous tissue) and contact impedance due to electrode 2 (\(Z_{contact}\)). b Circuit to calculate impedance \(Z_{23}\), it corresponds to sum of tissue impedance (skin impedance and subcutaneous tissue), contact impedance due to electrode 2, and contact impedance due to electrode 3
\(V_g\) supplies an AC signal (\( 5~V_{pp} \)) to the circuit through electrode 3; \(V_{e2}\) corresponds to the voltage measured at electrode 2; the voltage \(V_r\) at the reference resistance \(R_{ref}\) is calculated with \(V_g\) as the reference, and \(V_{21} = V_{e2} - V_r\). \(V_r\) and \(V_{e2}\) are measured in phasor form (magnitude and phase). The impedance is calculated as \(Z_{sum} = Z_{contact} + Z_{SB12}\), where \( Z_{contact}\) represents the contact impedance (skin/electrode) of a single electrode and \(Z_{SB12}\) represents the impedance of the subcutaneous tissue between electrodes one and two. It can also be calculated by the expression:
$$\begin{aligned} Z_{sum}=Z_{contact}+Z_{SB12}=\frac{V_{21}}{I} \end{aligned}$$
where I is the current in the circuit and can be calculated as \(I=\frac{V_r}{R_{ref}}\) (both values known). The circuit of Fig. 4b allows determining the impedance \(Z_{12}\), which satisfies \(Z_{12}=2Z_{contact} + Z_{SB12}\). \(V_g\) supplies an AC signal (\( 5~V_{pp} \)) to the circuit through electrode 2; the voltage \(V_r\) is measured at the reference resistance \(R_{ref}\), and \(V_{21}\) is obtained as \(V_{21}=V_g - V_r\). Thus, the impedance is calculated as:
$$\begin{aligned} Z_{12}=2Z_{contact}+Z_{SB12}=\frac{V_{21}}{I} \quad\text{where}\; I=\frac{V_r}{R_{ref}} \end{aligned}$$
Finally, contact impedance is calculated as \(Z_{contact} = Z_{12}-Z_{sum}\).
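As a worked numerical illustration of these two steps, the sketch below (ours; the phasor readings are hypothetical values, not measurements) recovers \(Z_{contact}\) by complex arithmetic:

    import cmath

    R_ref = 1000.0  # ohms; reference resistor value assumed for illustration

    def phasor(mag, phase_deg):
        return cmath.rect(mag, phase_deg * cmath.pi / 180.0)

    # Hypothetical phasor readings at one frequency of the sweep
    V21_a, Vr_a = phasor(1.2, -18.0), phasor(0.9, 0.0)  # circuit of Fig. 4a
    V21_b, Vr_b = phasor(1.9, -22.0), phasor(0.8, 0.0)  # circuit of Fig. 4b

    Z_sum = V21_a / (Vr_a / R_ref)   # Z_contact + Z_SB12
    Z_12 = V21_b / (Vr_b / R_ref)    # 2*Z_contact + Z_SB12
    Z_contact = Z_12 - Z_sum
    print(abs(Z_contact), cmath.phase(Z_contact))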
Lissajous figures
Given an input signal x(t) and a phase-shifted output signal y(t) such that:
$$\begin{aligned} x(t)= \; & {} X_0sin(\omega t) \nonumber \\ y(t)= \; & {} Y_0sin(\omega t+\theta ) \end{aligned}$$
the Lissajous figure is generated when plotting \(x(t)\ \text{vs}\ y(t)\) signals. The intersection of the figure with the y axis is identified and named \(y_0\). The phase shift between the waveforms is calculated by the expression:
$$\begin{aligned} \theta =arcsin \left( \frac{y_0}{Y_0}\right) \end{aligned}$$
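A minimal Python sketch of the Lissajous method (our illustration on synthetic sine waves with a known 30° shift; not the study's data):

    import numpy as np

    fs, f = 10000.0, 50.0
    t = np.arange(0, 0.2, 1 / fs)
    x = np.sin(2 * np.pi * f * t)                     # input x(t)
    y = np.sin(2 * np.pi * f * t + np.deg2rad(30.0))  # shifted output y(t)

    # Intersection of the Lissajous figure with the y axis: samples where
    # x crosses zero while increasing; there y ~ Y0*sin(theta).
    idx = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]
    y0 = np.mean(y[idx])
    theta = np.degrees(np.arcsin(y0 / y.max()))
    print(theta)  # close to 30 degrees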
Cross-correlation

Cross-correlation is a measure of the similarity between two series as a function of the displacement of one relative to the other. The cross-correlation between a discrete input signal x(n) and an output signal y(n) is given by the expression:
$$\begin{aligned} r_{x,y}(l)=\sum _{n=-\infty }^{\infty }x(n)y(n-l) \end{aligned}$$
where l represents a shift in discrete time between the signals. The aim is to find the value of l such that the maximum correlation between the signals is obtained. From this value, the phase shift in degrees can be calculated from the equation:
$$\begin{aligned} \theta =\frac{360\cdot l\cdot F}{F_s} \end{aligned}$$
where F corresponds to the frequency of the original wave and \(F_s\) to the sampling frequency at which the signals were acquired.
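The same phase shift can be estimated by direct evaluation of \(r_{x,y}(l)\), as in this sketch (ours, on the same synthetic signals as above):

    import numpy as np

    fs, f = 10000.0, 50.0
    n = np.arange(2000)
    x = np.sin(2 * np.pi * f * n / fs)
    y = np.sin(2 * np.pi * f * n / fs + np.deg2rad(30.0))

    # r_{x,y}(l) = sum_n x(n) * y(n - l), evaluated over one period of lags
    period = int(fs / f)
    lags = np.arange(-period, period + 1)
    r = [np.sum(x[period:-period] * y[period - l:len(y) - period - l])
         for l in lags]
    l_best = lags[np.argmax(r)]
    print(360.0 * l_best * f / fs)  # close to 30 degrees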
We determined voltages in phasor form from their waveforms at the frequencies of interest. We designed a high-pass digital filter for eliminating the DC offset (Filter Designer App, Matlab, Mathworks, Inc.), with the following parameters: FIR, cut-off frequency = 0.05 Hz, order = 2000. The filter was applied off-line to both kinds of signals (acquired from textile and reference electrodes), so the delays affected both equally. The magnitude was calculated by obtaining the peaks of each wave. The phase was calculated using the two techniques described above (Lissajous figures and cross-correlation).
In order to compare the absolute impedance values of textile and disposable electrodes, we converted each data set of curves into a single scalar value. We used the AUC score as a comparison parameter since the magnitude of the impedance in the magnitude–frequency plots decreases monotonically as the frequency increases for all the samples. The use of the AUC score as a comparison criterion for spectral curves was previously reported by Sarbaz et al. [28].
We used a multifactorial ANOVA (either two-way or repeated measures) in cases where the statistical assumptions of normality and homoscedasticity were applicable. For those cases when the assumptions were not satisfied, we performed nonparametric tests, such as Kruskal–Wallis and Wilcoxon. The main factors evaluated were: the material of the textile electrode (cotton, cotton–polyester, lycra, silver-plated nylon, and polyester), and their behavior relative to the reference (textile electrode vs Ag/AgCl electrodes). The measurement scheme for this test is shown in Fig. 5.
Contact impedance measurement scheme. Measurements are performed simultaneously in both textile and Ag/AgCl electrodes. The switching circuit has been used to interchange the measurement pins in the different configurations as explained before. Control is exerted automatically from a software application
We selected three different points on each leg to perform the measurements, to take into account the local variations of the skin. The tests were performed on the legs due to the ease of locating several electrodes for simultaneous measurements. In this way, the probability of motion artifacts and of interference from other bioelectric signals (such as ECG and breathing signals) is reduced.
Polarization measurements
The electrode polarization is a consequence of an alteration of the charge distribution at the skin-electrode interface, and causes a baseline drift or DC potential/offset in ECG signals. In practice, measuring ECG signals requires at least two electrodes connected differentially to an instrumentation amplifier that reduces the effects of common-mode interference. The ECG trace must be amplified and the DC potential reduced. Electrochemical phenomena at the skin cause variations, such as polarization, at the skin-electrode interface, which result in interfering signals that are added to the desired ECG signal, even when the characteristics of the electrodes are the same. Polarization at the skin-electrode interface was calculated by measuring a DC potential in open circuit at the terminals of a pair of electrodes attached to the skin. The potentials were registered once the patient had remained motionless for one minute, to avoid instabilities in the skin-electrode interface produced by involuntary biomechanical movements. The skin-electrode interface is the largest source of interference due to polarization potentials. Polarization potentials are normally in the order of millivolts; however, when values exceed that order in the presence of action potential variations, the output of the amplifier saturates, making the ECG signal difficult to extract and the polarization potentials difficult to eliminate.
We set two independent acquisition channels (Cassy Lab, Label Didactics, Ltd.) to perform simultaneous measurements of DC potentials. The measurements were carried out both in textile and reference electrodes on the same muscle group. We programmed a series of measurements using four electrodes (two textile electrodes and two Ag/AgCl electrodes), as reported by Rattfalt et al. [19]. We used three different points on each leg (sampling frequency = 10 sps), during approximately 30 min.
We calculated DC potentials for each type of electrode as the mean absolute difference between two consecutive samples, using the standard average exchange ratio \(\bar{X}_i\) suggested by Rattfält [18, 19].
$$\begin{aligned} &\bar{X}_i=\frac{\sum _{t=0}^{N-2}|X_i(t)-X_i(t+1)|}{N-1} \nonumber \\ &\bar{X}=\frac{\sum _i\bar{X}_i}{n} \end{aligned}$$
where \(i\) denotes each particular individual, \(n\) the total number of individuals for each electrode type, and \(N\) the total number of samples. It is necessary to guarantee that the volunteer is motionless to avoid muscle signals produced by involuntary movements.
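A short sketch of this computation (the recordings below are synthetic stand-ins for the measured DC-potential series):

import numpy as np

def exchange_ratio(x):
    """Mean absolute difference between consecutive samples of one recording."""
    x = np.asarray(x, dtype=float)
    return np.abs(np.diff(x)).sum() / (len(x) - 1)

# Synthetic DC-potential recordings, one per individual (10 sps, ~30 min).
recordings = [np.random.normal(15.4, 0.3, 18000) for _ in range(4)]

x_bar_i = [exchange_ratio(r) for r in recordings]   # per-individual ratios
x_bar = float(np.mean(x_bar_i))                     # average over individuals
print("Average exchange ratio:", x_bar)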
We used the interquartile range to eliminate outliers in each set of observations. Each measurement series then became a single datum representing the average behavior of the electrode over time. A two-way ANOVA was used, with the type of electrode as the factor; the assumptions of normality (Kolmogorov–Smirnov, p = 0.2051) and homoscedasticity (Levene test, p = 0.1149) were satisfied. The measurement scheme for this test is shown in Fig. 6.
Open circuit polarization measurement scheme. Measurements were performed simultaneously on both textile and Ag/AgCl electrodes. We used a Cassy Lab sensor (Label Didactics Ltd.) that allows programming and automating a set of measurements from a software application on the PC
Noise measurements
These experiments were intended to quantify the noise level due to external interference, biological signals other than the ECG, artifacts, and the measuring equipment. We performed simultaneous measurements with the textile and commercial Ag/AgCl electrodes. The experiments consisted of capturing the same 1-lead ECG using different pairs of textile electrodes. We performed the measurements using lead II, as suggested by Takamatsu [29]. The acquisition process was conducted for a period of five minutes, with the textile electrodes attached to the skin by an elastic waistband. Figure 7 depicts the location of the electrodes and the connection to the electronic acquisition board EVM ADS1298 (Texas Instruments).
ECG signals measurement scheme. Measurements were performed simultaneously in both textile and Ag/AgCl electrodes. A circuit board based on the ADS1298 chip (Texas Instrument) was configured to acquire two channels at the same time
We designed digital filters (Filter Designer App, Matlab, MathWorks, Inc.) for removing undesired components from the power supply and their corresponding harmonics (two stopband FIR filters, stop frequencies = 60 and 120 Hz, respectively, Kaiser window, \( \beta =0.5 \), order = 150, stopband width = 10 Hz) and for attenuating the frequency components outside the range of cardiac signals (one passband FIR filter, band pass = 0.05–150 Hz, Kaiser window, \( \beta =0.5 \), order = 150), as suggested in [30]. We applied three methods to analyze the data: noise power, cross-correlation coefficient, and segmentation.
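An equivalent filter chain can be sketched as follows. The sampling rate and the interpretation of the 10 Hz value as the notch width are assumptions, since neither is stated explicitly here:

import numpy as np
from scipy import signal

fs = 500.0    # assumed ECG sampling rate, Hz
order = 150   # filter order -> order + 1 taps

# Band-stop filters around the mains frequency and its first harmonic,
# each 10 Hz wide (assumed interpretation of the stated stopband width).
notch60 = signal.firwin(order + 1, [55.0, 65.0], fs=fs,
                        window=("kaiser", 0.5), pass_zero="bandstop")
notch120 = signal.firwin(order + 1, [115.0, 125.0], fs=fs,
                         window=("kaiser", 0.5), pass_zero="bandstop")

# Band-pass filter for the cardiac band (0.05-150 Hz).
bandpass = signal.firwin(order + 1, [0.05, 150.0], fs=fs,
                         window=("kaiser", 0.5), pass_zero="bandpass")

def clean_ecg(x):
    """Apply the three FIR stages with zero-phase filtering."""
    for taps in (notch60, notch120, bandpass):
        x = signal.filtfilt(taps, [1.0], x)
    return x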
Noise power quantifies the magnitude of the signal removed in the filtering process. The aim of this procedure is to identify which type of electrode is more affected by external interference, biological noise, and artifacts from muscle movement and breathing. The process involves computing the difference between the original and the filtered signal and then calculating its average power.
$$\begin{aligned} &E(i)=|ECG_{original}(i)-ECG_{filtered}(i)| \nonumber \\ &\bar{P}=\frac{1}{N}\sum _{i=0}^{N-1}E^2(i) \end{aligned}$$
where \(E(i)\) is the absolute value of the difference between the original and filtered signals at sample \(i\), \(\bar{P}\) is the noise power, and \(N\) represents the number of samples.
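In code, this is a two-line computation (array names are placeholders):

import numpy as np

def noise_power(ecg_original, ecg_filtered):
    """Average power of the component removed by the filtering stage."""
    e = np.abs(np.asarray(ecg_original) - np.asarray(ecg_filtered))
    return float(np.mean(e ** 2))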
The second method is the Pearson cross-correlation coefficient. Since the cardiac signals were recorded simultaneously with both the textile and the disposable electrodes in the same area of the volunteer's body, we expected two morphologically identical signals. However, they exhibited a potential drift, which was removed using the digital high-pass filtering described above. The normalized cross-correlation provides a value that expresses the similarity of two signals in terms of morphology; therefore, low values of the cross-correlation index indicate a large effect of noise on the ECG signals recorded by the different electrodes.
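A minimal sketch of this morphology comparison (signal names are placeholders):

import numpy as np

def morphology_similarity(ecg_textile, ecg_agcl):
    """Pearson cross-correlation between two simultaneously recorded leads."""
    return float(np.corrcoef(ecg_textile, ecg_agcl)[0, 1])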
The third method involves a segmentation algorithm that detects and quantifies complete P–Q–R–S–T waves. It uses the continuous wavelet transform, the discrete wavelet transform, and the Pan–Tompkins algorithm for the classification of the ECG signal, as reported by Bustamante et al. [31]. The error rate is calculated by dividing the number of complete ECG segments registered with the experimental material by the number of ECG segments captured simultaneously with the Ag/AgCl commercial electrodes.
Long-term performance
The performance of the electrodes over time is affected by the wear of the material. We evaluated the degree of deterioration of the textile electrodes by quantifying their capacity to record complete ECG complexes that are morphologically similar to those recorded by the Ag/AgCl electrodes. The signal acquisition process was the same as described in the noise measurement section. We evaluated each type of electrode (cotton, cotton–polyester, lycra, polyester, and silver-plated nylon) for a period of 36 h on each of the four subjects. The volunteers continued with their daily lives but were asked to return to the laboratory for measurements at 0, 1, 3, 7, 12, 24, 30, and 36 h. The measurements obtained at time 0 were added to the dataset for the noise analysis. During the entire process, we did not remove the textile electrodes from the volunteer's skin; nevertheless, we adjusted them against displacements at each partial measurement. Due to the duration of the experiment, we replaced the disposable electrodes at each measurement. We did not perform additional measurements, such as contact impedance, during these tests.
Signal processing (i.e., filtering) was the same as described in the noise measurements section. As this study focuses on the performance of fabrics, no special filtering or higher-order filter was needed. The selected range of frequencies permits the relevant components to pass through and provides a high-fidelity tracing of the P–Q–R–S–T ECG wave. Consequently, we used the segmentation algorithm introduced above to extract the P–Q–R–S–T complex from the ECG trace and split it into single P–Q–R–S–T waves for individual analysis. Each ECG segment from the textile electrodes was compared with the segments captured simultaneously from the Ag/AgCl electrodes; the error rate was then calculated by dividing the number of complete ECG segments registered with the experimental material by the number of ECG segments captured simultaneously with the Ag/AgCl commercial electrodes. We analyzed the data through a repeated-measures multivariate ANOVA.
According to the data input given by Mestrovic (2016), the calculated conductivity value was 2.64 S/cm for the disposable commercial Ag/AgCl 3M electrodes used in this study, two orders of magnitude above cotton–polyester and lycra (337 and 393 mS/cm, respectively) [32]. Lycra performs noticeably better than cotton–polyester, evidencing the different affinity of the two materials for the PEDOT:PSS solution. The order of the conductivity values indicates that lycra is actually better than cotton–polyester at enhancing ionic transfer through the skin-electrode interface. Disposable electrodes have the highest conductivity among the samples, including those reported by Pani et al. The conductivity data are supported by Fig. 12b; note, however, that the data reported by Pani do not correspond to an impedance analysis. It is clearly visible that lycra and cotton–polyester electrodes have similar performance. The conductivity of Med-Tex P130 was not calculated because it is not an isotropic material, meaning it does not have uniform conductivity: it is plated for silver ion release, and we presume that the dispersion among electrodes made of Med-Tex P130 is high.
Figure 3 shows the characteristics of the weaves of each fabric used in the study. Cotton and cotton–polyester exhibit a similar pattern: the fabric is made of thick yarns (approximately 200 µm) intertwined at short intervals, and only a few empty spaces between yarns are appreciable. Despite their physical similarities, cotton–polyester is closer to lycra than to cotton in terms of contact impedance. Lycra presents thinner yarns than cotton (approximately 150 µm); the pattern of the fabric is linear, and the yarns are close together and tight, which leaves very few empty spaces. Indeed, lycra exhibits the lowest impedance values. Nylon–silver exhibits a pattern similar to lycra, and its network of intertwined yarns leaves few empty spaces; however, its contact impedance is higher. Polyester presents the thinnest yarns (approximately 100 µm), and they are widely spaced, which causes many empty spaces. This may explain its poor performance in establishing a good skin-electrode interface, which results in the highest contact impedance values (the black line in Fig. 8).
Contact impedance. a Average of the magnitude of the contact impedance of each material versus frequency. Each line of the graph represents the average value of the magnitude of the contact impedance evaluated on the different test subjects. b Box and whisker plot of the area under the curve (AUC) of each of the materials; the AUC is a scalar summarizing each impedance spectral curve. c Shadow plot of the contact impedance magnitude versus frequency: the solid line represents the average value, and the shadow is the standard deviation of the values around the average. Textiles (blue), Ag/AgCl (red). d Box and whisker plot of the area under the curve comparing textile versus Ag/AgCl electrodes
We recorded and plotted the data from the four experiments separately for each individual. We assessed the electrical performance of textile electrodes by analyzing impedance magnitude, polarization variability, ECG morphology deviations, and proneness to electrical noise. We used statistical tools to compare textile electrodes against Ag/AgCl commercial electrodes.
Figure 8a shows a Bode plot of the average impedance magnitude over the selected frequency range, from 0.1 Hz to 10 kHz. The statistical analysis of the data showed that the assumptions of normality and homoscedasticity were not satisfied; thus, we used the Kruskal–Wallis test to interpret the data. Figure 8b presents a box plot with the distribution of the data. The figure was constructed from data taken from four test subjects at six test points, corresponding to 24 measurements per material performed simultaneously with the reference electrodes. The data were analyzed using a nonparametric Kruskal–Wallis test yielding p = 0.8684 (confidence = 95%), indicating that there are no significant differences between electrode types. Figure 8c shows a shadow plot representing the average impedance of the textile materials (blue) compared to the reference Ag/AgCl electrodes (red); the shadow represents the standard deviation. A clear difference in the impedance magnitude of the two groups can be appreciated. Figure 8d presents a comparison of the data distribution by groups. In total, 24 samples per material (120 measurements) and the same number of measurements with the reference electrodes were used. In this case, the assumptions of normality and homoscedasticity were not satisfied either; hence, it was necessary to perform a Wilcoxon test. The test indicated significant differences between treatments (p = 2.2 × 10⁻¹⁶, confidence = 95%): the textile electrodes generally presented higher contact impedances than the Ag/AgCl commercial electrodes, as well as higher dispersion.
We obtained minimal variability in the polarization potentials along the measurements (we only noticed small changes due to muscular activation). The average polarization level was 15.4 mV, and the standard average exchange ratio calculated between consecutive measures was 269.29 µV every 0.1 s. Under the same conditions, the Ag/AgCl electrodes showed an average polarization level of 2.54 mV (standard average exchange ratio = 163.56 µV every 0.1 s). Detailed results of this test are presented in Table 1.
Table 1 Measurements of the average polarization potential and of the variability between consecutive samples taken at time intervals of 0.1 s, registered on each of the four test subjects with each type of textile material (bold values) compared with commercial Ag/AgCl electrodes (italic values)
Silver-plated nylon electrodes showed the lowest polarization potential value (p = 0.004035). Figure 9a depicts the average behavior over time; the results do not show significant differences among the other materials. Figure 9b presents a general comparison of the electrical behavior of textile and Ag/AgCl electrodes. The data did not meet the normality assumption, so we performed a Wilcoxon test. It confirmed that electrodes treated with PEDOT:PSS have higher polarization than Ag/AgCl electrodes.
Average polarization potential drift for different materials and series of measurements. a Box and whisker plot of the average polarization drift of different materials. b Box and whisker plot of the average polarization drift comparing textile vs Ag/AgCl electrodes
Statistical analysis showed significant differences in the behavior of the materials (\(p=0.01945\)), particularly that the lycra electrodes are less sensitive to noise than silver-plated nylon electrodes, as depicted in Fig. 10a.
Noise measurements. a Box and whisker plot of the average noise power quantified from a filtering process. b Box and whisker plot of the average noise power comparing textile vs Ag/AgCl electrodes. c Plot of the average Pearson cross-correlation coefficient calculated between ECG signals acquired simultaneously (textile vs Ag/AgCl). d Segment of an ECG signal simultaneously recorded with textile (blue) and Ag/AgCl (red) electrodes. e Average percentage of error in the detection of ECG signal segments.
Additionally, we compared the overall performance of the textile electrodes against the reference Ag/AgCl electrodes. For this purpose we performed a two-way ANOVA using the type of electrode (textile or disposable) as the factor; see Fig. 10b. The assumptions of normality (Shapiro–Wilk, p = 0.09372) and homoscedasticity (Levene, p = 0.2317) were satisfied. We observed that the disposable electrodes present significantly lower noise levels than the textile electrodes (p = 9.17 × 10⁻¹¹).
The cross-correlation analysis (Fig. 10c) quantifies the similarity between the signals recorded simultaneously with the electrodes under test; an example segment is shown in Fig. 10d. In general, the correlation values are higher than 80%. Moreover, we observed no statistically significant differences among the textile electrodes' performance.
The ability of the electrodes to acquire ECG signals was tested with the segmentation algorithm. The error rate was determined and tabulated in Table 2, and the data were assessed with a Kruskal–Wallis test; complementary comparison results are presented in Fig. 10e. The test showed no significant differences between the different types of treatments (p = 0.9965). In Table 2, 92.5% of the measurements yielded a percentage error of less than 2%; such values are within the tolerance of the algorithm (98%).
Table 2 Number of subjects in the error range versus electrode type
We performed eight measurements, lasting 5 min each, over a time interval of 36 h, and counted the number of complete P–Q–R–S–T waves using the segmentation algorithm. Figure 11 shows the results for each type of material.
Long-term performance: segmentation error percentage versus time. Each one of the box diagrams represents the percentage of error in the determination of ECG signals of the textile electrodes with respect to the commercial electrodes, in the different measurement time periods. Most of the graphs show that the median of the measurements is below 5%. a Cotton electrodes. b Cotton–polyester electrodes. c Lycra electrodes. d Nylon–silver electrodes. e Polyester electrodes.
There is no evidence of differences in the number of complete ECG signals detected during the test. In general, after 36 h, the quality of the captured ECG signals is similar to that obtained at 0 h. Except for the very last measurement with the silver-plated nylon electrodes, the average percentage error was less than 5% in all cases. The silver-plated nylon electrodes are the only ones that presented an apparent relationship between the increase in the percentage error and time. Figure 11 presents the results graphically for each type of material.
We propose a single-dispersion Cole impedance model for textile electrodes treated with PEDOT:PSS (Fig. 12). The impedance parameters obtained for this model were \(R_\infty =35.065\;\text{k}\Omega \), \(R_1=3.701\;\text{M}\Omega \), \(C_1=15.129\;\text{nF}\), and \(\alpha _1=0.8397\). This static model is only intended to represent a simplification of the data acquired in this study. The model can be adjusted by increasing the number of tests and individuals, tuning it to the specific set of data. The high variability of biological systems makes it difficult to obtain deterministic, predictive models; therefore, reference models become a valid alternative for preliminary examinations.
Circuit model of contact impedance for textile electrodes. a Single dispersion Cole impedance model. b Continuous cyan line fits the model, other lines are the values of impedance magnitude obtained in the research
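A short numerical sketch of the fitted model follows. The parametrization used (a series resistor \(R_\infty\) plus \(R_1\) in parallel with a constant-phase element) is one common form of the single-dispersion Cole model and is assumed here, as is the frequency grid:

import numpy as np

# Parameters reported above for the single-dispersion Cole model.
R_inf, R_1, C_1, alpha_1 = 35.065e3, 3.701e6, 15.129e-9, 0.8397

def cole_impedance(f):
    """Z(f) = R_inf + R_1 / (1 + R_1 * C_1 * (j*2*pi*f)**alpha_1)."""
    jw = 1j * 2.0 * np.pi * np.asarray(f, dtype=complex)
    return R_inf + R_1 / (1.0 + R_1 * C_1 * jw ** alpha_1)

freq = np.logspace(-1, 4, 100)            # 0.1 Hz to 10 kHz
magnitude = np.abs(cole_impedance(freq))  # curve comparable to Fig. 12b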
The main results of the research are summarized in Table 3.
Table 3 Summary of the main findings of the investigation
ECG signals were correctly registered by a set of textile electrodes treated with a conductive polymer (PEDOT:PSS). The materials under test were cotton, cotton–polyester (65% cotton, 35% polyester), lycra, polyester, and MEDTEX P-130. No gel, substrate, or adhesive material was used to improve the ionic conductivity. The experiments confirmed that these materials can be used in the fabrication of wearable sensors for daily use. The materials tested are suitable for applications where the use of disposable electrodes is not practical.
The fabrics in Fig. 3 contain fibers with a coating of PEDOT:PSS showing no visible deterioration at the microscopic level. Based on the conductivity calculations above, we determined that the combination of a highly conductive PEDOT:PSS solution with cotton, lycra, cotton–polyester, and polyester provides an acceptable surface resistance for medical applications, especially the monitoring of ionic transfer. All conductivity-related properties are connected with the fibrous structure of the fabric. Resistance is associated with the contact resistance between neighboring yarns and the numerous contact points at the crossings between yarns in each fabric [33]. Figure 3b, c shows that the manner in which the yarns of lycra and cotton–polyester are arranged on the surface of the fabrics does not significantly impact their resistance, which was observed in the impedance responses. However, when they are compared with Fig. 3d, the impedance differs notably; indeed, the gaps between yarns increase the impedance as a consequence of the material resistance. From the observations of the fibers, we identified that anisotropy may result from different numbers of warp and weft yarns per unit length. It is visible that the current does not spread uniformly over the surface. As we did not control the orientation of the fabric surface at the interface, we cannot make claims about the direction of current flow along the yarns or the detailed effects of the gaps between them. However, the multidirectional arrangements of yarns showed better results in the impedance analysis and conductivity calculations; we presume that the interlacing of yarns leads to better current collection, owing to the isotropy of the electroconductive properties of the fabrics shown in Fig. 3b, c. Nylon–silver and polyester showed the worst performance due to their anisotropic structure.
We determined that the materials treated with PEDOT:PSS presented no statistically significant differences in acquiring ECG signals. MEDTEX P-130 based electrodes presented better performance only in the polarization tests and showed a slight tendency toward poorer performance in ECG signal acquisition. Lycra-based textile electrodes exhibited highly reliable behavior, reflected in lower mean impedance values and lower dispersion across repetitions.
Notwithstanding the multiple factors that influence skin impedance and the high variability present in contact impedance data, even within the same individual, we confirmed that the contact impedance is higher in textile electrodes than in commercial electrodes. One factor that could strongly influence these results is the effective area of the textile electrodes (4 cm²), an analysis of which was out of the scope of this work. The effective electrode area affects the skin-electrode interface and its impedance, which actively influences the acquired ECG signal. In fact, the relationship between the effective area of the electrode and contact impedance was studied by Puurtinen et al. [20]. It is also well known that high contact impedance is balanced with high and ultra-high input impedance electronic systems [34, 35].
It is important to highlight that all the tests reported in this paper were performed with dry electrodes. We did not use any type of gel or electrolyte to help improve the conductivity of the skin-electrode interface, which could explain the high values of contact impedance with respect to the Ag/AgCl electrodes. Practical applications that incorporate electrodes to acquire bioelectric signals are intended to fit into the wardrobe in a way that feels natural to the end user: wearable devices should not become invasive elements that are difficult to use and manipulate.
This work aimed to elucidate which material provides advantages for the manufacture of garments that allow the continuous monitoring of electrocardiographic signals. However, since there is no significant difference between the electrical characteristics of the materials, it is relevant to conduct studies focused on the mechanical properties of textile materials. Besides, strategies should be sought to improve the fitting of the electrode with the skin, in order to improve the effective area of contact.
Previous work demonstrated that textile electrodes have a significant effect on charge transfer, due to the complex contact area created by the woven structure and because the surface of the electrode is not completely parallel to the skin. Many variables inherent to textile materials contribute to the high variability of the contact impedance, for instance, the number of fibers per cross-section, fiber properties, conductive polymer adhesion to the fibers, fiber density, and hairiness [36]. Figure 3 shows the physical appearance of the surface of the different fabrics used in this study. The woven structures apparently influence the contact impedance through the contact area formed with the skin. Polyester shows the highest impedance value, which may correspond to the empty spaces that remain between the fibers. On the other hand, lycra has the lowest impedance values, which may reflect a better contact area created by tightly arranged fibers.
Textile electrodes treated with PEDOT:PSS have a higher polarization level than the conventional ones and than the MEDTEX P-130 based electrodes (mean value: 15.4 mV). The results did not show substantial changes under conditions of complete rest; however, they were slightly affected by artifacts generated by muscle activation. The polarization potential tends to become uniform over time (variability < 0.3 mV every 0.1 s), which facilitates its elimination through the use of analog electronic circuits.
In the segmentation process, the error rates were generally under 2%. Signals taken with textile electrodes showed a greater presence of noise than those from the commercial (Ag/AgCl) electrodes. Nonetheless, all the segments of the ECG signal were identified properly. MEDTEX P-130 electrodes are more sensitive to noise than the other textile electrodes.
Long-term performance measurements showed that after 36 h the electrodes treated with PEDOT:PSS continue to perform adequately, i.e., ECG signals were clearly identified. MEDTEX P-130 based electrodes showed deterioration of the ECG signal during the test, likely as a consequence of interaction with biological fluids. There is no evidence relating changes in the properties of the electrodes over time; misreadings can instead be attributed to poor contact with the skin as a result of movement, displacement of the material, or momentary disconnection during the test. Due to the duration of the test, data were only visualized off-line, after the segmentation process. The duration of the long-term performance measurements was conditioned by the availability of the laboratories and the volunteers. Takamatsu et al. [29] reported reliable results after 72 h using textile electrodes with similar features.
We propose, as future work, the development of monitoring systems using wearable sensors that incorporate PEDOT:PSS treated electrodes. Investigations should focus on the behavior of textile electrodes over long periods of time, material behavior after washing processes, noise and artifacts during physical activity, and the effects of sweating on the quality of the ECG. Bearing in mind that the material is intended to be used in wearable devices, a future study contemplating dynamic tests is necessary. It is also important to study how the effective area, the morphology, the thickness of the polymer on the fabric, and other characteristics can modify the skin-electrode contact impedance and its effect on the quality of the acquired ECG signals.
Contact impedance, polarization, and noise level tests showed significant differences in favor of the Ag/AgCl commercial electrodes. We found that fabrics treated with PEDOT:PSS, such as cotton–polyester (65% cotton, 35% polyester), lycra, and polyester, are suitable for activities that do not involve diagnosis. They do, however, have a considerable advantage over disposable electrodes, which must be replaced at least every 24 h.
Long-term performance tests demonstrated that fabrics treated with PEDOT:PSS are functional after 36 h of continued use: they allowed the acquisition of ECG signals comparable to those at the beginning of the test. However, electrodes constructed with silver-plated nylon showed considerable deterioration of the ECG signal after 24 h.
We measured noise level under 5% on fabrics treated with PEDOT:PSS and silver-plated nylon. Such fabrics could be used as primary sensing elements of an ECG monitoring system.
The average polarization level measured on the textiles under test was 15.4 mV. The polarization levels were constant and only affected by involuntary muscle movements performed by the individuals. Such values can be removed by DC coupling mechanisms or by digital filtering. Relative to the behavior of the Ag/AgCl electrodes, the silver-plated nylon electrodes showed performance superior to that of the electrodes treated with PEDOT:PSS.
None of the tests yielded statistically significant evidence that one PEDOT:PSS treated material used in textile electrodes performs better than the others. Therefore, in the development of wearable ECG acquisition systems, the type of material is not a decisive aspect from the point of view of signal quality and electrical behavior. Future studies should focus on the mechanical characterization of the materials to obtain an adequate coupling to the skin, orienting the applications toward systems for athletes and for people in rehabilitation or at risk of heart disease, as well as prevention systems, the generation of early warnings, and the promotion of self-care.
World Health Organization. Cardiovascular diseases (CVDs). 2016. http://www.who.int/mediacentre/factsheets/fs317/en/. Accessed 29 Jan 2016.
Taji B, Shirmohammadi S, Groza V, Bolic M. An ECG monitoring system using conductive fabric. In: 2013 IEEE international symposium on medical measurements and applications proceedings (MeMeA). New York: IEEE; 2013. p. 309–14.
Raj D, Ha-Brookshire JE. How do they create Superpower? An exploration of knowledge-creation processes and work environments in the wearable technology industry. Int J Fash Des Technol Educ. 2016;9(1):82–93.
Fiedler P, Biller S, Griebel S, Haueisen J. Impedance pneumography using textile electrodes. In: 2012 annual international conference of the IEEE engineering in medicine and biology society (EMBC). New York: IEEE; 2012. p. 1606–9.
Tronstad C, Johnsen GK, Grimnes S, Martinsen OG. A study on electrode gels for skin conductance measurements. Physiol Meas. 2010;31(10):1395.
Ask P, Öberg PA, Ödman S, Tenland T, Skogh M. ECG electrodes: a study of electrical and mechanical long-term properties. Acta Anaesthesiologica Scandinavica. 1979;23(2):189–206.
Marozas V, Petrenas A, Daukantas S, Lukosevicius A. A comparison of conductive textile-based and silver/silver chloride gel electrodes in exercise electrocardiogram recordings. J Electrocardiol. 2011;44(2):189–94.
Carpi F, De Rossi D. Electroactive polymer-based devices for e-textiles in biomedicine. IEEE Trans Inf Technol Biomed. 2005;9(3):295–318.
Pani D, Dessi A, Gusai E, Saenz-Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Evaluation of novel textile electrodes for ECG signals monitoring based on PEDOT:PSS-treated woven fabrics. In: 2015 37th annual international conference of the IEEE engineering in medicine and biology society (EMBC); 2015. p. 3197–200.
Pani D, Dessi A, Gusai E, Saenz Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Fully textile, pedot:pss based electrodes for wearable ecg monitoring systems. IEEE Trans Biomed Eng. 2016;63(3):540–9.
Pani D, Dessi A, Saenz-Cogollo JF, Barabino G, Fraboni B, Bonfiglio A. Fully textile, PEDOT: PSS based electrodes for wearable ECG monitoring systems. IEEE Trans Biomed Eng. 2016;63(3):540–9.
Xie L, Yang G, Xu L, Seoane F, Chen Q, Zheng L. Characterization of dry biopotential electrodes. In: Proceedings of the 35th annual international conference of the IEEE engineering in medicine and biology society (EMBC). Osaka; 2013. p. 1478–81. https://doi.org/10.1109/EMBC.2013.6609791.
Mestrovic MA, Helmer RJN, Kyratzis L, Kumar D. Preliminary study of dry knitted fabric electrodes for physiological monitoring. In: Proceeding of the 3rd international conference on intelligent sensors, sensor networks and information, 2007. ISSNIP 2007, p. 601–6. https://doi.org/10.1109/ISSNIP.2007.4496911.
Oh TI, Yoon S, Kim TE, Wi H, Kim KJ, Woo EJ, Sadleir RJ. Nanofiber web textile dry electrodes for long-term biopotential recording. IEEE Trans Biomed Circuits Syst. 2013;7(2):204–11.
Beckmann L, Neuhaus C, Medrano G, Jungbecker N, Walter M, Gries T, Leonhardt S. Characterization of textile electrodes and conductors using standardized measurement setups. Physiol Meas. 2010;31(2):233.
Chen Y, Pei W, Chen S, Wu X, Zhao S, Wang H, Chen H. Poly(3,4-ethylenedioxythiophene) (PEDOT) as interface material for improving electrochemical performance of microneedles array-based dry electrode. Sens Actuators B Chem. 2013;188:747–56.
Patterson RP. The electrical characteristics of some commercial ECG electrodes. J Electrocardiol. 1978;11(1):23–6.
Rattfält L, Lindén M, Hult P, Berglin L, Ask P. Electrical characteristics of conductive yarns and textile electrodes for medical applications. Med Biol Eng Comput. 2007;45(12):1251–7.
Rattfält L, Björefors F, Nilsson D, Wang X, Norberg P, Ask P. Properties of screen printed electrocardiography smartware electrodes investigated in an electro-chemical cell. Biomed Eng Online. 2013;12(1):64.
Puurtinen MM, Komulainen SM, Kauppinen PK, Malmivuo JAV, Hyttinen JAK. Measurement of noise and impedance of dry and wet textile electrodes, and textile electrodes with hydrogel. In: conference proceedings : 28th annual international conference of the IEEE engineering in medicine and biology society. IEEE engineering in medicine and biology society. Conference 1; 2006. p. 6012–5.
Pola T, Vanhala J. Textile electrodes in ECG measurement. 3rd international conference onIntelligent sensors, sensor networks and information, ISSNIP 2007; 2007. p. 635–9.
Baba A, Burke M. Measurement of the electrical properties of ungelled ECG electrodes. Int J Biol Biomed Eng. 2008;2(3):89–97.
GmbH SPV. Shieldex® Med-tex P130. 2013. https://goo.gl/KGK1Hj. Accessed 18 Sep 2017.
Guisande C, et al. Rwizard software. Universidad de Vigo. España. 2014. http://www.ipez.es/RWizard. Accessed 16 Nov 2017.
Freeborn TJ, Maundy B, Elwakil AS. Cole impedance extractions from the step-response of a current excited fruit sample. Comput Electron Agric. 2013;98:100–8.
Freeborn TJ. A survey of fractional-order circuit models for biology and biomedicine. IEEE J Emerg Sel Topics Circuits Syst. 2013;3(3):416–24.
Vanlerberghe F, De Volder M, de Beeck MO, Penders J, Reynaerts D, Puers R, Van Hoof C. 2-Scale topography dry electrode for biopotential measurements. In: 2011 annual international conference of the IEEE engineering in medicine and biology society, EMBC; 2011. p. 1892–5.
Sarbaz Y, Towhidkhah F, Mosavari V, Janani A, Soltanzadeh A. Separating Parkinsonian patients from normal persons using handwriting features. J Mech Med Biol. 2013;13(03):1350030.
Takamatsu S, Lonjaret T, Crisp D, Badier JM, Malliaras GG, Ismailova E. Direct patterning of organic conductors on knitted textiles for long-term electrocardiography. Sci Rep. 2015;5:1–7. https://doi.org/10.1038/srep15003.
Kligfield P, Gettes LS, Bailey JJ, Childers R, Deal BJ, Hancock EW, van Herpen G, Kors JA, Macfarlane P, Mirvis DM, Pahlm O, Rautaharju P, Wagner GS. Recommendations for the standardization and interpretation of the electrocardiogram. Part I: the electrocardiogram and Its technology a scientific statement from the American heart association electrocardiography and arrhythmias committee, council on clinical cardiology; the American college of cardiology foundation; and the heart rhythm society endorsed by. J Am Coll Cardiol. 2007;49(10):1109–27.
Bustamante Arcila C, Duque Vallejo S, Orozco-Duque A, Bustamante Osorno J. Development of a segmentation algorithm for ecg signals, simultaneously applying continuous and discrete wavelet transform. In: Image, signal processing, and artificial vision (STSIVA), 2012 XVII Symposium Of; 2012. p. 44–9. https://doi.org/10.1109/STSIVA.2012.6340555
Mestrovic M. Characterisation and biomedical application of fabric sensors. Master of Engineering thesis, RMIT University; 2007. https://researchbank.rmit.edu.au/eserv/rmit:14607/Mestrovic.pdf.
Tokarska M, Frydrysiak M, Zieba J. Electrical properties of flat textile material as inhomegeneous and anisotropic structure. J Mater Sci Mater Electron. 2013;24(12):5061–8.
Gargiulo G, Bifulco P, Cesarelli M, Ruffo M, Romano M, Romano M, Calvo RA, Jin C, van Schaik A. An ultra-high input impedance ECG amplifier for long-term monitoring of athletes. Med Device (Auckland, NZ). 2010;3:1–9.
Chi YM, Maier C, Cauwenberghs G. Ultra-high input impedance, low noise integrated amplifier for noncontact biopotential sensing. IEEE J Emerg Sel Topics Circuits Syst. 2011;1(4):526–35.
Priniotakis G, Westbroek P, Van Langenhove L, Hertleer C. Electrochemical impedance spectroscopy as an objective method for characterization of textile electrodes. Trans Inst Meas Control. 2007;29(3–4):271–81.
HA-C and JJP were responsible for writing the manuscript. HA-C, JJP and RC were responsible for planning the experiments. HA-C was responsible for overall planning of the study. RC was responsible for planning and carrying out the experiments. All authors read and approved the final manuscript.
The authors gratefully acknowledge the researchers at the University of Cagliari in Italy, especially Dr. José Francisco Saenz, for facilitating the materials used in this work and for the conceptual and logistical support. Likewise, we are grateful to the members of the Center of Bioengineering at the Universidad Pontificia Bolivariana in Colombia for the methodological support and project financing. Thanks to the faculties of Health Sciences and Engineering at the Universidad Católica de Oriente in Colombia for accompanying the project and for facilitating the laboratories where the testing was done and the measurement equipment used. Finally, a special thanks to the volunteers who served as test subjects in this investigation and endured long working hours without receiving any remuneration.
At the time of their initial briefing, all study participants were informed of the likelihood that the data would be part of a publication.
All subjects gave informed consent, and were briefed both verbally and in written form before their ear impressions were taken, in accordance with the regulations of the local ethics committee (Committee on Health Research Ethics, Universidad Pontificia Bolivariana).
Mobile Computation and Ubiquituos Research Group GIMU, Universidad Católica de Oriente, Sector 3 Cra 46-40 B-50, Rionegro, Colombia
Reinel Castrillón
Centro de Bioingeniería, Facultad de Ingeniería Eléctrica y Electrónica, Universidad Pontificia Bolivariana, Circular 1 #70-01, Medellin, 050031, Colombia
Jairo J. Pérez & Henry Andrade-Caicedo
Jairo J. Pérez
Henry Andrade-Caicedo
Correspondence to Reinel Castrillón.
Castrillón, R., Pérez, J.J. & Andrade-Caicedo, H. Electrical performance of PEDOT:PSS-based textile electrodes for wearable ECG monitoring: a comparative study. BioMed Eng OnLine 17, 38 (2018). https://doi.org/10.1186/s12938-018-0469-5
Textile electrodes
PEDOT:PSS
Electric characterization
Contact impedance
Why do we need stabilization mode
How to implement stabilization mode
Analogies between translational and rotational motion
Derivation of ratio for required angular velocity of flywheel
Python implementation
Sample of Python code using the formula:
Example of complete code of the satellite stabilization program in Python:
Satellite stabilization mode means maintaining a zero angular velocity. This mode is necessary, for example, to obtain clear images, or to transmit them to a ground receiving station when the data transmission time is long and the satellite antenna must not deviate from the ground station. The theory described in this lesson also applies to maintaining any desired angular velocity (not only zero), and to tasks such as tracking a moving object.
You can change the satellite's angular velocity using flywheels, jet engines, electromagnetic coils, and gyrodynes. In this example we consider control of the torque using a flywheel. The action of this device is based on the law of conservation of angular momentum: for example, when the flywheel motor spins up in one direction, the spacecraft (SC) begins to rotate in the other direction, under the action of the same torque but directed in the opposite sense in accordance with Newton's third law. If, under the influence of external factors, the spacecraft begins to turn in a certain direction, it is enough to increase the rotation speed of the flywheel in the same direction; the unwanted rotation of the spacecraft will stop, because the flywheel "takes" the rotational moment instead of the satellite. Information about the angular velocity of the satellite is obtained from an angular velocity sensor. In this example, we consider how to compute control commands for the flywheel from the readings of the angular velocity sensor and the current flywheel speed, so that the satellite stabilizes or maintains the required angular velocity.
The analogue of the Law of conservation of momentum for rotational motion is the Law of conservation of angular momentum or the Law of conservation of kinetic momentum:
$\sum\limits_{i=1}^{n}{{{J}_{i}}\cdot {{\omega }_{i}}}=const \label{eq:1}$
In general, the rotational motion of a satellite is described by laws similar to those for translational motion. For each parameter of translational motion there is an analogous parameter of rotational motion:
Translational motion ↔ Rotational motion
Force $F\leftrightarrow M$ Torque (moment)
Distance $S\leftrightarrow \alpha$ Angle
Speed $V\leftrightarrow\omega$ Angular velocity
Acceleration $a\leftrightarrow\epsilon$ Angular acceleration
Mass $m\leftrightarrow J$ Moment of inertia
The laws of motion also look similar.
Title of law
Newton's second law: $F=m\cdot a$ $M=J\cdot \epsilon$
Kinetic energy: $E=\frac{m\cdot {{V}^{2}}}{2}$ $E=\frac{J\cdot {{\omega}^{2}}}{2}$
Law of momentum conservation: $\sum\limits_{i=1}^{n}{{{m}_{i}}\cdot {{V}_{i}}}=const$ $\sum\limits_{i=1}^{n}{{{J}_{i}}\cdot {{\omega }_{i}}}=const$
Let us write the law of conservation of kinetic moment of the system 'satellite + flywheel' for the moments of time "1" and "2":
${{J}_{s}}\cdot {{\omega }_{s1}}+{{J}_{m}}\cdot {{\omega }_{m1}}={{J}_{s}}\cdot {{\omega }_{s2}}+{{J}_{m}}\cdot {{\omega }_{m2}}$
The absolute speed of the flywheel, i.e., the flywheel speed in an inertial coordinate system (for example, one associated with the Earth), is the sum of the satellite's angular velocity and the angular velocity of the flywheel relative to the satellite, i.e., the relative flywheel speed:
${{\omega }_{mi}}={{\omega }_{si}}+{{{\omega }'}_{mi}}$
Please note: the flywheel can measure only its own angular velocity relative to the satellite body, i.e., the relative angular velocity.
Let us express the desired speed of the flywheel which must be set:
${{J}_{s}}\cdot {{\omega }_{s1}}+{{J}_{m}}\cdot \left( {{\omega }_{s1}}+{{{{\omega }'}}_{m1}} \right)={{J}_{s}}\cdot {{\omega }_{s2}}+{{J}_{m}}\cdot \left( {{\omega }_{s2}}+{{{{\omega }'}}_{m2}} \right) $
$ \left( {{J}_{s}}+{{J}_{m}} \right)\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right)=-{{J}_{m}}({{{\omega }'}_{m1}}-{{{\omega }'}_{m2}}) $
$ {{{\omega }'}_{m2}}={{{\omega }'}_{m1}}+\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right) $
Denote the ratio $\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}$ as $k_d$; for brevity, from here on we drop the prime and write ${{\omega }_{m}}$ for the flywheel speed relative to the satellite.
Operation of the algorithm does not require the exact value of $\frac{{{J}_{s}}+{{J}_{m}}}{{{J}_{m}}}$, because the flywheel cannot instantly reach the commanded angular velocity. Also, the measurements are not perfectly precise: the satellite's angular velocity measured with the angular velocity sensor always contains measurement noise. Note as well that measuring the angular velocity and issuing commands to the flywheel occur with some minimum time step. All these limitations mean that $k_d$ has to be selected experimentally; if that does not work, one builds detailed computer models that take all of the above limitations into account. In our case, the coefficient $k_d$ will be selected experimentally.
$ {{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\left( {{\omega }_{s1}}-{{\omega }_{s2}} \right) $
The angular velocity $\omega_{s2}$ at time "2" is the target angular velocity; we denote it by $\omega_{s\_goal}$. Thus, if the satellite is supposed to maintain the angular velocity $\omega_{s\_goal}$, then, knowing the current angular velocity of the satellite and the current speed of the flywheel, one can calculate the desired flywheel speed to maintain the "rotation with constant speed" mode:
${{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\left( {{\omega }_{s1}}-{{\omega }_{{s\_goal}}} \right)$
Using the constant-speed rotation mode, the satellite can be turned through any angle by rotating it at a constant speed for a certain time. The time the satellite needs to rotate at the constant speed $\omega_{s\_goal}$ to turn through the required angle $\alpha$ is obtained by dividing these values:
$t=\frac{\alpha}{\omega_{{s\_goal}}}$
If the satellite is required to be stabilized, then $\omega_{s\_goal}=0$ and the expression becomes simpler:
${{\omega }_{m2}}={{\omega }_{m1}}+{{k}_{d}}\cdot {{\omega }_{s1}}$
# request angular velocity sensor (AVS) and flywheel data
hyro_state, gx_raw, gy_raw, gz_raw = hyro_request_raw(hyr_num)
mtr_state, mtr_speed = motor_request_speed(mtr_num)
# conversion of the angular velocity values to degrees/sec
gx_degs = gx_raw * 0.00875
gy_degs = gy_raw * 0.00875
gz_degs = gz_raw * 0.00875
# if the AVS is mounted with its z axis up, the angular velocity
# of the satellite coincides with the AVS reading about the z axis; otherwise
# the sign must be changed: omega = -gz_degs
omega = gz_degs
mtr_new_speed = int(mtr_speed + kd*(omega - omega_goal))
# The device functions used below (motor_*, hyro_*) are assumed to be
# provided by the satellite's onboard Python environment.
from time import sleep

# Differential feedback coefficient.
# The coefficient is positive if the flywheel is mounted with its z axis up
# and the AVS is also mounted with its z axis up.
# The coefficient is chosen experimentally, depending on the shape
# and the mass of your satellite.
kd = 200.0

# The time step of the algorithm, sec
time_step = 0.2
# Target satellite angular velocity, degrees/sec
# For stabilization mode is equal to 0.0 degrees/sec
omega_goal = 0.0
# Flywheel number
mtr_num = 1
# Maximum allowed flywheel speed, rpm
mtr_max_speed = 7000
# Number of AVS (angular velocity sensor)
hyr_num = 1
# Function for determining the new flywheel speed.
# The new flywheel speed is the current flywheel speed
# plus a speed increment proportional to the error
# in the angular velocity.
# mtr_speed - current flywheel speed, rpm
# omega - current satellite angular velocity, degrees/sec
# omega_goal - target angular velocity of the satellite, degrees/sec
# mtr_new_speed - required flywheel speed, rpm
def motor_new_speed_PD(mtr_speed, omega, omega_goal):
    mtr_new_speed = int(mtr_speed + kd*(omega - omega_goal))
    # clamp the command to the allowed flywheel speed range
    if mtr_new_speed > mtr_max_speed:
        mtr_new_speed = mtr_max_speed
    elif mtr_new_speed < -mtr_max_speed:
        mtr_new_speed = -mtr_max_speed
    return mtr_new_speed
# The function turns on all devices
# to be used in the main program.
def initialize_all():
print "Enable motor №", mtr_num
motor_turn_on(mtr_num)
sleep(1)
print "Enable angular velocity sensor №", hyr_num
hyro_turn_on(hyr_num)
# The function disables all devices
def switch_off_all():
print "Finishing..."
print "Disable angular velocity sensor №", hyr_num
hyro_turn_off(hyr_num)
motor_set_speed(mtr_num, 0)
motor_turn_off(mtr_num)
print "Finish program"
# The main function of the program, from which the remaining functions are called.
def control():
    initialize_all()
    # Initialize flywheel status
    mtr_state = 0
    # Initialize the status of the AVS
    hyro_state = 0
    # Initialize the measured satellite angular velocity, degrees/sec
    omega = 0.0
    for i in range(1000):
        print "i = ", i
        # Angular velocity sensor (AVS) and flywheel requests
        hyro_state, gx_raw, gy_raw, gz_raw = hyro_request_raw(hyr_num)
        mtr_state, mtr_speed = motor_request_speed(mtr_num)
        # Processing the readings of the angular velocity sensor (AVS),
        # calculation of the satellite angular velocity.
        # If the error code of the AVS is 0, i.e. there is no error
        if not hyro_state:
            gx_degs = gx_raw * 0.00875
            gy_degs = gy_raw * 0.00875
            gz_degs = gz_raw * 0.00875
            print "gx_degs =", gx_degs, \
                "gy_degs =", gy_degs, "gz_degs =", gz_degs
            omega = gz_degs
        elif hyro_state == 1:
            print "Fail because of access error, check the connection"
        else:
            print "Fail because of interface error, check your code"
        # Processing the flywheel and setting the target angular velocity.
        if not mtr_state:  # if the error code is 0, i.e. no error
            print "Motor_speed: ", mtr_speed
            # setting the new flywheel speed
            mtr_new_speed = motor_new_speed_PD(mtr_speed, omega, omega_goal)
            motor_set_speed(mtr_num, mtr_new_speed)
        sleep(time_step)
    switch_off_all()

# Start the main program
control()
1. Change the program so that the satellite rotates at a constant speed.
2. Change the program so that the satellite works according to the flight timeline:
* stabilization within 10 seconds
* 180 degree rotation in 30 seconds
3. Rewrite the program in C and get it working.
What is the units digit of $23^{23}$?
Let's find the cycle of units digits of $3^n$, starting with $n=1$ (note that the tens digit 2 in 23 has no effect on the units digit): $3, 9, 7, 1, 3, 9, 7, 1,\ldots$ . The cycle of units digits of $23^{n}$ is 4 digits long: 3, 9, 7, 1. Thus, to find the units digit of $23^n$ for any positive $n$, we must find the remainder, $R$, when $n$ is divided by 4 ($R=1$ corresponds to the units digit 3, $R=2$ corresponds to the units digit 9, etc.) Since $23\div4=5R3$, the units digit of $23^{23}$ is $\boxed{7}$.
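The arithmetic can be double-checked with one line of Python using modular exponentiation:

print(pow(23, 23, 10))  # prints 7, the units digit of 23**23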
Issue 3.3, Summer 2021: Milestones and Millstones

Individualized Decision-Making Under Partial Identification: Three Perspectives, Two Optimality Results, and One Paradox

by Yifan Cui

Published on Oct 22, 2021. DOI: 10.1162/99608f92.d07b8d16
Unmeasured confounding is a threat to causal inference and gives rise to biased estimates. In this article, we consider the problem of individualized decision-making under partial identification. Firstly, we argue that when faced with unmeasured confounding, one should pursue individualized decision-making using partial identification in a comprehensive manner. We establish a formal link between individualized decision-making under partial identification and classical decision theory by considering a lower bound perspective of value/utility function. Secondly, building on this unified framework, we provide a novel minimax solution (i.e., a rule that minimizes the maximum regret for so-called opportunists) for individualized decision-making/policy assignment. Lastly, we provide an interesting paradox drawing on novel connections between two challenging domains, that is, individualized decision-making and unmeasured confounding. Although motivated by instrumental variable bounds, we emphasize that the general framework proposed in this article would in principle apply for a rich set of bounds that might be available under partial identification.
Keywords: causal inference, decision-making strategies, individualized preferences, mixed strategy, optimality, partial identification, sharpness
In the era of big data, observational studies are a treasure for both association analysis and causal inference, with the potential to improve decision-making. Depending on the set of assumptions one is willing to make, one might achieve either point, sign, or partial identification of causal effects. In particular, under partial identification, it might be inevitable to make suboptimal decisions. Policymakers caring about decision-making would face the following important question: What are optimal strategies corresponding to different risk preferences?
In this article, the author offers a unified framework that generalizes several decision-making strategies in the literature. Building on this unified framework, the author also provides a novel minimax solution (i.e., a rule that minimizes the maximum regret for so-called opportunists) for individualized decision-making and policy assignment.
1. The Power of Storytelling: Different Views Might Lead to Different Decisions
Suppose one is playing a two-armed slot machine. The rewards $R_{-1}$ and $R_{1}$ are the payoffs for hitting the jackpot of each arm, respectively. For simplicity, let us assume that both arms always give positive rewards ($R_{-1}, R_{1} > 0$), that is, one is guaranteed not to lose and therefore would not refrain from playing this game. However, due to some uncertainty, one does not have prior knowledge of the exact values of $R_{-1}$ and $R_1$. Fortunately, suppose there is a magic instrument, which can help one to identify the range of rewards.
By only providing one with the left panel of Figure 1, that is, the range of $R_1 - R_{-1}$, most people might opt to pull arm $-1$. But wait a minute... where am I, and why am I looking at the left panel without knowing the real payoffs? After looking at the right panel, the decision might be changed depending on a person's risk preference.
Figure 1. A toy example on slot machines. The left panel: the possible range of $R_1 - R_{-1}$; the right panel: the possible ranges of $R_{-1}$ and $R_1$, respectively.
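To make the two panels concrete, consider the following sketch in Python. The interval endpoints are hypothetical (Figure 1 reports no numbers), and the two classical criteria shown are meant only to illustrate that the left-panel and right-panel views can support different choices:

# Hypothetical reward intervals for the two arms.
bounds = {-1: (2.0, 6.0), 1: (1.0, 9.0)}   # arm: (lower, upper)

# Maximin (right-panel view): pick the arm with the best worst-case reward.
maximin_arm = max(bounds, key=lambda a: bounds[a][0])

# Minimax regret: the worst-case regret of arm a is the largest possible
# shortfall relative to the other arm over the two reward intervals.
def worst_case_regret(a):
    other = -a
    return max(bounds[other][1] - bounds[a][0], 0.0)

regret_arm = min(bounds, key=worst_case_regret)
print(maximin_arm, regret_arm)   # here maximin picks -1, minimax regret picks 1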
Is there such an instrument in real life? The answer is in the affirmative. One such instrument is a so-called instrumental variable (IV). In statistics and related disciplines, an IV method is used to estimate causal relationships when randomized experiments are not feasible or when there is noncompliance in a randomized experiment. Intuitively, a valid IV induces changes in the explanatory variable but otherwise has no direct effect on the dependent variable, allowing one to uncover the causal effect of the explanatory variable on the dependent variable. Under certain IV models, one can obtain bounds for counterfactual means. So how would one pursue decision-making when faced with partial identification? The rest of the article offers a comprehensive view of individualized decision-making under partial identification as well as several novel solutions to various decision- and policy-making strategies.
An optimal decision rule provides a personalized action/treatment strategy for each participant in the population based on one's individual characteristics. A prevailing strand of work has been devoted to estimating optimal decision rules (Athey & Wager, 2021; Murphy, 2003; Murphy et al., 2001; Qian & Murphy, 2011; Robins, 2004; Zhang et al., 2012; Zhao et al., 2012, and many others); we refer to Chakraborty and Moodie (2013), Kosorok and Laber (2019), and Tsiatis et al. (2019) for an up-to-date literature review on this topic.
Recently, there has been a fast-growing literature on estimating individualized decision rules based on observational studies subject to potential unmeasured confounding (Cui & Tchetgen Tchetgen, 2021a, 2021b, 2021c; Han, 2019, 2020, 2021; Kallus et al., 2019; Kallus & Zhou, 2018; Pu & Zhang, 2021; Qiu et al., 2021a, 2021b; Yadlowsky et al., 2018; Zhang & Pu, 2021). In particular, Cui and Tchetgen Tchetgen (2021c) pointed out that one could identify treatment regimes that maximize lower bounds of the value function when one has only partial identification through an IV. Pu and Zhang (2021) further proposed an IV-optimality criterion to learn an optimal treatment regime, which essentially recommends the treatment for patients for whom the estimated conditional average treatment effect bound covers zero based on the length of the bounds, that is, based on the left panel of Figure 1. See more details in Cui and Tchetgen Tchetgen (2021a, 2021c) and Zhang and Pu (2021).
In this article, we provide a comprehensive view of individualized decision-making under partial identification through maximizing the lower bounds of the value function. This new perspective unifies various classical decision-making strategies in classical decision theory. Building on this unified framework, we also provide a novel minimax solution (for so-called opportunists who are unwilling to lose) for individualized decision-making and policy assignment. In addition, we point out that there is a mismatch between different optimality results, that is, an 'optimal' rule that attains one criterion does not necessarily attain the other. Such mismatch is a distinctive feature of individualized decision-making under partial identification, and therefore makes the concept of universal optimality for decision-making under uncertainty ill-defined. Lastly, we provide a paradox to illustrate that a non-individualized decision can conceivably lead to an outcome superior to an individualized decision under partial identification. The provided paradox also sheds light on using IV bounds as sanity check or policy improvement.
To conclude this section, we briefly introduce notation used throughout the article. Let $Y$ denote the outcome of interest and $A \in \{-1,1\}$ be a binary action/treatment indicator. Throughout, it is assumed that larger values of $Y$ are more desirable. Suppose that $U$ is an unmeasured confounder of the effect of $A$ on $Y$. Suppose also that one has observed a pretreatment binary IV $Z \in \{-1,1\}$. Let $X$ denote a set of fully observed pre-IV covariates. Throughout, we assume the complete data are independent and identically distributed realizations of $(Y, X, A, Z, U)$; thus the observed data are $(Y,X,A,Z)$.
2. A Brief Review of Optimal Decision Rules with No Unmeasured Confounding
An individualized decision rule is a mapping from the covariate space to the action space $\{-1, 1\}$. Suppose $Y_a$ is a person's potential outcome under an intervention that sets $A$ to value $a$, $Y_{\mathcal{D}(X)}$ is the potential outcome under a hypothetical intervention that assigns $A$ according to the rule $\mathcal{D}$, that is, $Y_{\mathcal{D}(X)} \equiv Y_{1}I\{\mathcal{D}(X)=1\}+Y_{-1}I\{\mathcal{D}(X)=-1\}$, $E[Y_{\mathcal{D}(X)}]$ is the value function (Qian & Murphy, 2011), and $I\{\cdot\}$ is the indicator function. Throughout the article, we make the following standard consistency and positivity assumptions: (1) For a given regime $\mathcal{D}$, $Y = Y_{\mathcal{D}(X)}$ when $A = \mathcal{D}(X)$ almost surely. That is, a person's observed outcome matches his/her potential outcome under a given decision rule when the realized action matches his/her potential assignment under the rule; (2) We assume that $\Pr(A = a|X) > 0$ for $a = \pm 1$ almost surely. That is, for any observed covariates $X$, a person has an opportunity to take either action.

We wish to identify an optimal decision rule $\mathcal{D}^*$ that admits the following representation, that is,

$$(1) \quad \mathcal{D}^*(X) = \operatorname{sign}\{E(Y_1-Y_{-1}|X)>0\} ~\text{ or }~ \mathcal{D}^* = \arg\max_{\mathcal{D}} E[Y_{\mathcal{D}(X)}].$$
A significant amount of work has been devoted to estimating optimal decision rules relying on the following unconfoundedness assumption:
Assumption 1. (Unconfoundedness) $Y_a \perp\!\!\!\perp A \mid X$ for $a=\pm 1$.

The assumption essentially rules out the existence of an unmeasured factor $U$ that confounds the effect of $A$ on $Y$ upon conditioning on $X$. It is straightforward to verify that under Assumption 1, one can identify the value function $E[Y_{\mathcal{D}(X)}]$ for a given decision rule $\mathcal{D}$. Furthermore, the optimal decision rule in Equation 1 is identified from the observed data as

$$\mathcal{D}^*(X) = \operatorname{sign}\{\mathcal{C}(X)>0\},$$

where $\mathcal{C}(X)=E(Y|X,A=1) - E(Y|X,A=-1)=E(Y_1-Y_{-1}|X)$ denotes the conditional average treatment effect (CATE). As established by Qian and Murphy (2011), learning optimal decision rules under Assumption 1 can be formulated as

$$\mathcal{D}^*=\arg\max_{\mathcal{D}} E\left[\frac{I\{\mathcal{D}(X)=A\}\,Y}{\Pr(A|X)}\right],$$

where $\Pr(A|X)$ is the probability of taking $A$ given $X$. Zhang, Tsiatis, Laber, et al. (2012) proposed to directly maximize the value function over a parametrized set of functions. Rather than maximizing the above value function, Rubin and van der Laan (2012), Zhang, Tsiatis, Davidian, et al. (2012), and Zhao et al. (2012) transformed the above problem into a weighted classification problem,

$$\arg\min_{\mathcal{D}} E\{|\mathcal{C}(X)|\, I[\operatorname{sign}\{\mathcal{C}(X)>0\} \neq \mathcal{D}(X)]\}.$$

The ensuing classification approach was shown to have appealing robustness properties, particularly in a randomized study where no model assumption on $Y$ is needed.
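To make the two formulations above concrete, the following is a minimal Python sketch (assuming NumPy arrays y, a, d and a known or estimated propensity score; all names are hypothetical, not from the cited papers) of the inverse-probability-weighted value estimate and the induced weighted classification objective.

```python
import numpy as np

def ipw_value(d, a, y, prop_a):
    """IPW estimate of E[Y_{D(X)}]: average of I{D(X)=A} * Y / Pr(A|X).

    d and a take values in {-1, 1}; prop_a[i] = Pr(A = a[i] | X = x[i]).
    """
    return np.mean((d == a) * y / prop_a)

def weighted_classification_loss(d, cate):
    """Empirical analogue of E[|C(X)| I{sign(C(X)) != D(X)}] for a
    candidate rule d evaluated at estimated CATE values."""
    return np.mean(np.abs(cate) * (np.sign(cate) != d))
```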
3. Instrumental Variable with Partial Identification
In this section, instead of relying on Assumption 1, we allow for unmeasured confounding, which might cause biased estimates of optimal decision rules. Let $Y_{z,a}$ denote the potential outcome had, possibly contrary to fact, a person's IV and treatment value been set to $z$ and $a$, respectively. Suppose that the following assumption holds:

Assumption 2. (Latent unconfoundedness) $Y_{z,a} \perp\!\!\!\perp (Z, A) \mid X, U$ for $z,a = \pm 1$.

This assumption essentially states that $U$ and $X$ together would in principle suffice to account for any confounding bias. Because $U$ is not observed, we propose to account for it when a valid IV $Z$ is available that satisfies the following standard IV assumptions (Cui & Tchetgen Tchetgen, 2021c):

Assumption 3. (IV relevance) $Z \not\perp\!\!\!\perp A \mid X$.

Assumption 4. (Exclusion restriction) $Y_{z,a}=Y_a$ for $z,a=\pm 1$ almost surely.

Assumption 5. (IV independence) $Z \perp\!\!\!\perp U \mid X$.

Assumption 6. (IV positivity) $0<\Pr(Z=1|X)<1$ almost surely.

Assumptions 3-5 are well-known IV conditions, while Assumption 6 is needed for nonparametric identification (Angrist et al., 1996; Greenland, 2000; Hernan & Robins, 2006; Imbens & Angrist, 1994). Assumption 3 requires that the IV is associated with the treatment conditional on $X$. Note that Assumption 3 does not rule out confounding of the $Z$-$A$ association by an unmeasured factor; however, if present, such a factor must be independent of $U$. Assumption 4 states that there can be no direct causal effect of $Z$ on $Y$ not mediated by $A$. Assumption 5 states that the direct causal effect of $Z$ on $Y$ would be identified conditional on $X$ if one were to intervene on $A=a$. Figure 2 provides a graphical representation of Assumptions 4 and 5.

Figure 2. A causal graph with unmeasured confounding. The bi-directed arrow between $Z$ and $A$ indicates the possibility that there may be unmeasured common causes confounding their association.

While Assumptions 3-6 together do not suffice for point identification of the counterfactual mean and average treatment effect, a valid IV, even under these four minimal assumptions, can partially identify the counterfactual mean and average treatment effect; that is, lower and upper bounds might be formed. Let $\mathcal{L}_{-1}(X)$, $\mathcal{U}_{-1}(X)$, $\mathcal{L}_{1}(X)$, $\mathcal{U}_{1}(X)$ denote lower and upper bounds for $E(Y_{-1}|X)$ and $E(Y_{1}|X)$; hereafter, we consider lower and upper bounds for $E(Y_{1}-Y_{-1}|X)$ of the form $\mathcal{L}(X)=\mathcal{L}_1(X)-\mathcal{U}_{-1}(X)$ and $\mathcal{U}(X)=\mathcal{U}_1(X)-\mathcal{L}_{-1}(X)$, respectively; sharp bounds for $E(Y_{1}-Y_{-1}|X)$ in certain prominent IV models have been shown to take such a form, see for instance the Robins-Manski bound (Manski, 1990; Robins, 1989), the Balke-Pearl bound (Balke & Pearl, 1997), the Manski-Pepper bound under a monotone IV assumption (Manski & Pepper, 2000), and many others. Here, we consider the conditional Balke-Pearl bounds (Cui & Tchetgen Tchetgen, 2021c) for a binary outcome as our running example. Let $p_{y,a,z,x}$ denote $\Pr(Y = y, A = a \mid Z = z, X = x)$; the Balke-Pearl bounds are then maxima (for the lower bounds) and minima (for the upper bounds) of linear combinations of these conditional probabilities.
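The exact Balke-Pearl expressions are lengthy and are not reproduced here. To illustrate how bounds of this type are computed from $p_{y,a,z,x}$, the following is a minimal Python sketch of the simpler Robins-Manski IV bounds for a binary outcome; it is a stand-in rather than the sharp Balke-Pearl bounds, and all function and variable names are hypothetical.

```python
def manski_iv_bounds(p):
    """Robins-Manski IV bounds on E[Y_1|x] and E[Y_{-1}|x] for binary Y.

    p[(y, a, z)] = Pr(Y=y, A=a | Z=z, X=x), with y in {0,1}, a,z in {-1,1}.
    Under IV independence and exclusion, for each z the mean E[Y_a|x] lies in
    [Pr(Y=1,A=a|z,x), Pr(Y=1,A=a|z,x) + Pr(A!=a|z,x)]; intersecting over z
    tightens the interval.
    """
    def bounds_for(a):
        lo = max(p[(1, a, z)] for z in (-1, 1))
        hi = min(p[(1, a, z)] + p[(1, -a, z)] + p[(0, -a, z)] for z in (-1, 1))
        return lo, hi

    l1, u1 = bounds_for(1)
    lm1, um1 = bounds_for(-1)
    # CATE bounds of the form used in the article: L = L_1 - U_{-1}, U = U_1 - L_{-1}.
    return {"L1": l1, "U1": u1, "L-1": lm1, "U-1": um1,
            "L": l1 - um1, "U": u1 - lm1}
```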
Additionally, one could proceed with other partial identification assumptions and corresponding bounds. We refer to references cited in Balke and Pearl (1997) and a review paper by Swanson et al. (2018) for alternative bounds.
We conclude this section by providing multiple settings in real life where an IV is available but Assumption 1 is unlikely to hold: 1) In a double-blind placebo-randomized trial in which participants are subject to noncompliance, the treatment assignment is a valid IV; 2) Another classical example is that in sequential, multiple assignment, randomized trials (SMARTs) in which patients are subject to noncompliance, the adaptive intervention is a valid IV. We note that the randomized minimax solution proposed later in Section 5.3 offers a promising strategy for this setting; 3) In social studies, a classical example is estimating the causal effect of education on earnings, where residential proximity to a college is a valid IV. We elaborate on the third example in the next section.
4. A Real-World Example
In this section, we first consider a real-world application on the effect of education on earnings using data from the National Longitudinal Study of Young Men (Card, 1993; Okui et al., 2012; Tan, 2006; Wang et al., 2017; Wang & Tchetgen Tchetgen, 2018), which consists of 5,525 participants aged between 14 and 24 in 1966. Among them, 3,010 provided valid education and wage responses in the 1976 follow-up. Following Tan (2006) and Wang and Tchetgen Tchetgen (2018), we consider education beyond high school as a binary action/treatment (i.e., $A$). A practically relevant question is the following: Which students would be better off starting college to maximize their earnings?

In this study, there might be unmeasured confounders even after adjusting for observed covariates; for example, unobserved preference for education levels might be an unmeasured confounder, as it is likely to be associated with both education and wage. We follow Card (1993), Wang et al. (2017), and Wang and Tchetgen Tchetgen (2018) and use the presence of a nearby four-year college as an instrument (i.e., $Z$). In this data set, 2,053 (68.2%) lived close to a four-year college, and 1,521 (50.5%) had education beyond high school. To illustrate the IV bounds with binary outcomes, we follow Wang et al. (2017) and Wang and Tchetgen Tchetgen (2018) and dichotomize the outcome wage (i.e., $Y$) at its median, that is, 5.375 dollars per hour. While we only use this as an illustrative example, we note that dichotomizing earnings might affect decision-making, and therefore in practice one might conduct a sensitivity analysis around the choice of cut-off. Following Wang and Tchetgen Tchetgen (2018), we adjust for age, race, father's and mother's education levels, indicators for residence in the south and in a metropolitan area, and IQ scores (i.e., $X$), all measured in 1966. Among them, race, parents' education levels, and residence are included as they may affect both the IV and the outcome; age is included as it is likely to modify the effect of education on earnings; and IQ scores, as a measure of underlying ability, are included as they may modify both the effect of proximity to college on education and the effect of education on earnings.

We use random forests to estimate the probabilities $p_{y,a,z,x}$ (with default tuning parameters in Liaw & Wiener, 2002) and then construct estimates of the Balke-Pearl bounds $\mathcal{L}_{-1}(X)$, $\mathcal{U}_{-1}(X)$, $\mathcal{L}_{1}(X)$, $\mathcal{U}_{1}(X)$, $\mathcal{L}(X)$, $\mathcal{U}(X)$. To streamline our presentation, we consider the subset of individuals of age 15, parents' education level 11 years, non-Black, and residence in a non-south and metropolitan area. Their IV CATE and counterfactual mean bounds $\mathcal{L}(X)$, $\mathcal{U}(X)$, $\mathcal{L}_{-1}(X)$, $\mathcal{U}_{-1}(X)$, $\mathcal{L}_{1}(X)$, $\mathcal{U}_{1}(X)$ are presented in Figure 3.

Figure 3. IV CATE and counterfactual mean bounds for two subjects with IQ scores 84.00 and 102.45, where $A=1$ and $A=-1$ refer to education beyond high school or not, respectively.
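The article's estimation used the R package randomForest; the following is a rough Python analogue (scikit-learn, with default tuning as in the article; the data layout and names are hypothetical) of estimating $p_{y,a,z,x}$ by classifying the four joint $(Y,A)$ levels on $(Z,X)$.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# y, a, z are length-n arrays coded in {-1, 1}; x is an (n, p) covariate matrix.
def fit_joint_model(y, a, z, x):
    """Fit Pr(Y=y, A=a | Z, X) by classifying the 4 joint (Y, A) levels."""
    joint = 2 * (y > 0).astype(int) + (a > 0).astype(int)  # levels 0..3
    model = RandomForestClassifier()  # default tuning, as in the article
    model.fit(np.column_stack([z, x]), joint)
    return model

def p_yazx(model, y, a, z, x_row):
    """Plug-in estimate of p_{y,a,z,x} at a single covariate value x_row."""
    level = 2 * int(y > 0) + int(a > 0)
    features = np.concatenate([[z], x_row]).reshape(1, -1)
    probs = model.predict_proba(features)[0]
    return probs[list(model.classes_).index(level)]
```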
The shape of the IV bounds looks similar to the slot machine example of Figure 1 given at the beginning of the article. When faced with such uncertainty, what are the different decision-making strategies? In the next section, we provide a new perspective on optimal decision-making under partial identification beyond just looking at the contrast or value function. Except for the real-world example, we focus throughout, for pedagogical purposes, on the population-level IV bounds instead of their empirical analogs.
5. The Lower Bound Perspective: A Unified Criterion
In Section 5.1, we link the lower bound framework to well established decision theory from an investigator's perspective. In Section 5.2, we extend our framework to take into account individual preferences of participants. In Section 5.3, we provide a formal solution to achieve a minimax regret goal by leveraging a randomization scheme. In Section 5.4, we reveal a mismatch between deterministic/randomized minimax regret and maximin utility, and conclude that there is no universal concept of optimality for decision-making under partial identification.
5.1. A Generalization of Classical Decision Theory
In this section, we establish a formal link between individualized decision-making under partial identification and classical decision theory. The set of rules $\mathcal{D}(w(x),x)$ that maximize the following lower bounds of $E[Y_{\mathcal{D}(X)}]$,

$$\Big\{E_X\big\{[1-w(X)]\,[\mathcal{L}(X)\, I\{\mathcal{D}(X)=1\} + \mathcal{L}_{-1}(X)] + w(X)\,[-\mathcal{U}(X)\, I\{\mathcal{D}(X)=-1\} + \mathcal{L}_{1}(X)]\big\}: \text{ where } w(x) \text{ can depend on } \mathcal{D}(x),\ 0\leq w(x)\leq 1, \text{ for any } x\Big\},$$

is denoted by $\mathcal{D}^{opt}$. The derivation of the lower bounds of $E[Y_{\mathcal{D}(X)}]$ is provided in the Appendix. Hereinafter, we refer to the decision-making strategy reasoned from $\mathcal{D}^{opt}$ as the lower bound criterion, where, as can be seen later, $w(x)$ reflects the investigator's preferences.

In Table 1, we provide examples of decision-making criteria that have previously appeared in classical decision theory, and we connect each such criterion to a corresponding $w(x)$. Hereafter, for a rule $\mathcal{D}$, we formally define utility as the value function $E[Y_{\mathcal{D}(X)}]$ and regret as $E[Y_{\mathcal{D}^*(X)}] - E[Y_{\mathcal{D}(X)}]$. We give the formal definition of each rule in Table 1, except that the mixed strategy is deferred to Section 5.3. In the following definitions, $\min$ or $\max$ without an argument is taken with respect to $E[Y_{\mathcal{D}(X)}]$ (recall that $E[Y_{\mathcal{D}(X)}]= E[E(Y_{1}|X)I\{\mathcal{D}(X)=1\}+E(Y_{-1}|X)I\{\mathcal{D}(X)=-1\}]$, where $E(Y_{-1}|X)$ and $E(Y_{1}|X)$ satisfy $\mathcal{L}_{-1}(X)\leq E(Y_{-1}|X)\leq \mathcal{U}_{-1}(X)$ and $\mathcal{L}_{1}(X)\leq E(Y_{1}|X)\leq \mathcal{U}_{1}(X)$, respectively), and $\mathcal{D}$ belongs to the set of all deterministic rules.
Maximax utility (optimist): $\max_{\mathcal{D}} \max E[Y_{\mathcal{D}(X)}]$;

(Wald) Maximin utility (pessimist): $\max_{\mathcal{D}} \min E[Y_{\mathcal{D}(X)}]$;

(Savage) Minimax regret (opportunist): $\min_{\mathcal{D}} \max (E[Y_{\mathcal{D}^*(X)}] - E[Y_{\mathcal{D}(X)}])$;

Hurwicz criterion: $\max_{\mathcal{D}} (\alpha \max E[Y_{\mathcal{D}(X)}] + (1-\alpha)\min E[Y_{\mathcal{D}(X)}])$;

Healthcare decision-making: $\max_{\mathcal{D}} E[E(Y_{-1}|X)+\mathcal{L}(X)\,I\{\mathcal{D}(X)=1\}]$.

For example, for the left panel of Figure 3, the maximax utility criterion recommends $A=1$; the maximin utility criterion recommends $A=-1$; the minimax regret criterion recommends $A=-1$.
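In terms of the four counterfactual-mean bounds, each deterministic criterion above reduces to a simple comparison at every covariate value $x$. The following is a minimal sketch (scalar inputs l1, u1, lm1, um1 stand for $\mathcal{L}_1(x)$, $\mathcal{U}_1(x)$, $\mathcal{L}_{-1}(x)$, $\mathcal{U}_{-1}(x)$; names are hypothetical):

```python
def recommend(l1, u1, lm1, um1, criterion="minimax_regret", alpha=0.5):
    """Per-x recommendation in {-1, 1} given L_1<=E[Y_1|x]<=U_1 and
    L_{-1}<=E[Y_{-1}|x]<=U_{-1}; CATE bounds are L = l1-um1, U = u1-lm1."""
    L, U = l1 - um1, u1 - lm1
    if criterion == "maximax":        # compare best cases
        return 1 if u1 > um1 else -1
    if criterion == "maximin":        # compare worst cases
        return 1 if l1 > lm1 else -1
    if criterion == "hurwicz":        # alpha-mixture of best and worst
        return 1 if alpha*u1 + (1-alpha)*l1 > alpha*um1 + (1-alpha)*lm1 else -1
    if criterion == "healthcare":     # act only with evidence of benefit
        return 1 if L > 0 else -1
    # Deterministic minimax regret: worst regret of A=1 is max(-L, 0),
    # of A=-1 is max(U, 0); pick the smaller.
    return 1 if max(-L, 0.0) < max(U, 0.0) else -1
```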
Notably, all criteria in Table 1 reduce to $\mathcal{D}^*$ under point identification. For a more complete treatment of decision-making strategies and formal axioms of rational choice, we refer to Arrow and Hurwicz (1972). Interestingly, we note that the (deterministic) minimax regret criterion coincides with the Hurwicz criterion with $\alpha=1/2$, because $\mathcal{L}(X)=\mathcal{L}_1(X)-\mathcal{U}_{-1}(X)$ and $\mathcal{U}(X)=\mathcal{U}_1(X)-\mathcal{L}_{-1}(X)$.
Table 1. Different representations of $w(x)$ for various decision-making strategies. Define $P\equiv \mathcal{L}(x)\, I\{\mathcal{D}(x)=1\} + \mathcal{L}_{-1}(x)$ and $Q \equiv -\mathcal{U}(x)\, I\{\mathcal{D}(x)=-1\} + \mathcal{L}_{1}(x)$. The arguments $x$ and $\mathcal{D}$ in $P$ and $Q$ are omitted for simplicity. To streamline the presentation, we omit the case of tiebreaking.
Remark 1. While both the lower bound criterion and the Hurwicz criterion involve an index, they are conceptually and technically different. The index $w(x)$, a number between 0 and 1, refers to the preference over actions; with $w(x)$ being a weighted average of $I(P<Q)$ and $I(P>Q)$, the lower bound criterion balances pessimism and optimism, whereas it may not be straightforward for the Hurwicz criterion to balance preferences over treatments/actions.
5.2. Incorporating Individualized Preferences: Numeric / Symbolic / Stochastic Inputs
We note that the lower bound criterion also sheds light on the process of data collection for individualized decision-making. As individuals in the population of interest may ultimately exhibit different preferences for selecting optimal decisions, it may be unreasonable to assume that all participants share a common preference for evaluating the optimality of an individualized decision rule under partial identification. An investigator might collect participants' risk preferences over the space of rational choices to construct an individualized decision rule. Therefore, we use the subscript $r$ (a participant's observed preference) to remind ourselves that $w_r(x)$ depends not only on $x$ but also on an individual's risk preference; that is, $r\in \mathcal{R}$ determines a specific form of $w_r(x)$ (see Table 1), where $\mathcal{R}$ is a collection of different risk preferences. Such $w_r(x)$ results in a decision rule $\mathcal{D}(w_r(x),x)$ depending on both $x$ (standard individualization, e.g., in the sense of subgroup identification) and $r$ (individualized risk preferences when faced with uncertainty), where $r$ can be collected from each individual.

Remark 2. We note that part of the elegance of this lower bound framework is that the risk preference does not come into play if there is no uncertainty about the optimal decision; that is, if $0 \notin (\mathcal{L}(x),\mathcal{U}(x))$, then regardless of which $w_r(x)$ is chosen, $\mathcal{D}(w_r(x),x)=\mathcal{D}^*(x)$.
Remarkably, the recorded index $w_r(x)$ for each $x$ could be numeric/symbolic/stochastic, that is, fall into any of the following three categories, while a participant only needs to specify a category and input a number between 0 and 1 if one of the first two categories is chosen:

Treatment/action preferences: Input a number $\beta$ between 0 and 1 that indicates a preference over treatments/actions, with larger $\beta$ in favor of $A=1$. Here, $w_r(x)=\beta$. In observational studies, most applied researchers, upon observing $0\in (\mathcal{L}(x),\mathcal{U}(x))$, would rely on the standard of care ($A=-1$) and opt to wait for more conclusive studies, which corresponds to $\beta=0$. In a placebo-controlled study with $A=-1$ denoting placebo, $\beta=0$ represents concerns about safety/aversion to treatment.

Utility/risk preferences: Input a number $\beta$ between 0 and 1 and let the symbolic input be $w_r(x)=\beta I(P>Q) + [1-\beta] I(P<Q)$, where $\beta$ refers to the coefficient of optimism. For instance, $\beta=0$ puts the emphasis on the worst possible outcome and refers to risk aversion; likewise, $\beta=1/2$ and $1$ refer to risk neutrality and risk taking, respectively.

An option for opportunists who are unwilling to lose: Render $w_r(x)$ random as a Bernoulli random variable; see Section 5.3 for details.

We highlight that the proposed index $w_r(x)$ unifies various concepts in artificial intelligence, economics, and statistics, which holds promise for providing a satisfactory regime for each individual through machine intelligence.
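For a numeric or symbolic input, the induced rule can be computed by directly evaluating the lower bound objective at $\mathcal{D}(x)=1$ and $\mathcal{D}(x)=-1$ and picking the larger; for a constant $w$, a short calculation shows this recommends $A=1$ exactly when $(1-w)\mathcal{L}(x)+w\mathcal{U}(x)>0$, so $w=0$ recovers the healthcare criterion $\mathcal{L}(x)>0$. A minimal sketch (hypothetical names; w_of may depend on the candidate action for symbolic inputs):

```python
def lower_bound_value(d, w, l1, u1, lm1, um1):
    """Lower bound objective at a single x for candidate action d in {-1, 1}."""
    L, U = l1 - um1, u1 - lm1
    P = L * (d == 1) + lm1      # L(x) I{D(x)=1} + L_{-1}(x)
    Q = -U * (d == -1) + l1     # -U(x) I{D(x)=-1} + L_1(x)
    return (1 - w) * P + w * Q

def induced_rule(w_of, l1, u1, lm1, um1):
    """Pick the action maximizing the lower bound; w_of(d) returns the index
    w, which may depend on the candidate action d for symbolic inputs."""
    v_plus = lower_bound_value(1, w_of(1), l1, u1, lm1, um1)
    v_minus = lower_bound_value(-1, w_of(-1), l1, u1, lm1, um1)
    return 1 if v_plus > v_minus else -1

# Numeric input with beta = 0 (standard-of-care preference):
# induced_rule(lambda d: 0.0, l1, u1, lm1, um1) returns 1 iff L(x) > 0.
```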
5.3. A Randomized Minimax Regret Solution for Opportunists
In this section, we consider whether an investigator/participant who happens to be an opportunist can do better in terms of protecting the worst case regret than the minimax regret approach in Table 1.
An opportunist might not put all of his or her eggs in one basket. This mixed strategy is also known as a mixed portfolio in portfolio optimization. Let $p(x)$ denote the probability of taking $A=1$ given $X=x$; by the definition of the minimax regret criterion, one essentially needs to solve the following for $p(x)$:

$$\min_{p(x)} \max\big([1-p(x)]\, \max\{\mathcal{U}(x),0\},\; p(x)\, \max\{-\mathcal{L}(x),0\}\big),$$

which leads to the following solution:

$$p^*(x)= \begin{cases} 1 & \mathcal{L}(x)>0, \\ 0 & \mathcal{U}(x)<0, \\ \frac{\mathcal{U}(x)}{\mathcal{U}(x)-\mathcal{L}(x)} & \mathcal{L}(x)<0<\mathcal{U}(x). \end{cases}$$

Such a choice of $p^*(x)$ guarantees a worst case regret of no more than

$$\begin{cases} 0 & \mathcal{U}(x)<0~\text{or}~\mathcal{L}(x)>0, \\ -\frac{\mathcal{L}(x)\mathcal{U}(x)}{\mathcal{U}(x)-\mathcal{L}(x)} & \mathcal{L}(x)<0<\mathcal{U}(x). \end{cases}$$
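A minimal sketch of the randomized solution (hypothetical names); the interior case equalizes the two worst-case regrets, which is exactly how $p^*$ arises:

```python
def p_star(L, U):
    """Probability of recommending A=1 under the randomized minimax regret
    rule, given CATE bounds L < U at a covariate value x."""
    if L > 0:
        return 1.0
    if U < 0:
        return 0.0
    # Interior case: choose p so that (1-p)*U = p*(-L), equalizing the
    # worst-case regrets of the two actions.
    return U / (U - L)

def worst_case_regret(L, U):
    """Worst-case regret guaranteed by p_star."""
    if L > 0 or U < 0:
        return 0.0
    return -L * U / (U - L)
```

For instance, with $\mathcal{L}(x)=-0.1$ and $\mathcal{U}(x)=0.3$, we get $p^*(x)=0.75$ and a guaranteed worst case regret of $0.075$, compared with $\min\{0.1, 0.3\}=0.1$ for the best deterministic rule.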
We formalize the above result in the following theorem.
Theorem 5.1. Define the stochastic policy $\widetilde{\mathcal{D}}$ by $\widetilde{\mathcal{D}}(x)=1$ with probability $p^*(x)$. The corresponding regret is bounded by

$$E[Y_{\mathcal{D}^*(X)}] - E[Y_{\widetilde{\mathcal{D}}(X)}] \leq E\left[-\frac{\mathcal{L}(X)\mathcal{U}(X)}{\mathcal{U}(X)-\mathcal{L}(X)}\, I\{\mathcal{L}(X)<0<\mathcal{U}(X)\}\right],$$

where $E[Y_{\widetilde{\mathcal{D}}(X)}] = E_X\big[E_{\widetilde{\mathcal{D}}}\big[E_{Y_{\widetilde{\mathcal{D}}}}\big[Y_{\widetilde{\mathcal{D}}(X)}\,\big|\,\widetilde{\mathcal{D}},X\big]\,\big|\,X\big]\big]$.

In contrast, by only considering deterministic rules, a minimax regret approach guarantees a worst case regret for $X=x$ of no more than

$$\min\big(\max\{\mathcal{U}(x),0\},\, \max\{-\mathcal{L}(x),0\}\big).$$
It is clear that
$$-\frac{\mathcal{L}(x)\mathcal{U}(x)}{\mathcal{U}(x)-\mathcal{L}(x)} < \min\{-\mathcal{L}(x),\, \mathcal{U}(x)\} \quad\text{if}\quad \mathcal{L}(x)<0<\mathcal{U}(x).$$
Therefore, the proposed mixed strategy gives a sharper minimax regret bound than Zhang and Pu (2021) and Pu and Zhang (2021), and is therefore sharper than any deterministic rule.
Remark 3. The result in this section does not necessarily rely on $\mathcal{L}(x)$ being defined as $\mathcal{L}_1(x) - \mathcal{U}_{-1}(x)$ and $\mathcal{U}(x)$ being defined as $\mathcal{U}_1(x) - \mathcal{L}_{-1}(x)$.
Remark 4. The proposed mixed strategy renders $w(x)$ or $w_r(x)$ a Bernoulli random variable with probability $p^*(x)$, and therefore yields a stochastic rule $\mathcal{D}(w(x),x)$ or $\mathcal{D}(w_r(x),x)$ assigning 1 with probability $p^*(x)$. Note that $w_r(x)$ being a Bernoulli random variable with parameter $p(x)$ and $w_r(x)$ being the scalar $p(x)$ are fundamentally different: the former provides a stochastic decision rule, so participants with the same $x$ can receive different recommendations, while the latter leads to a deterministic rule, so all participants with the same $x$ receive the same recommendation.
5.4. No Universal Optimality for Decision-Making Under Partial Identification
As can easily be seen from Table 1 as well as Section 5.3, there is a mismatch between deterministic/randomized minimax regret and maximin utility. In fact, each of the three rules corresponds to a different decision strategy. Such a mismatch is a distinctive feature of partial identification.
On the one hand, it is notable that $\{\mathcal{L}(x),\mathcal{U}(x)\}$ provides complementary information to the analyst, as it might inform the analyst as to when he/she might refrain from making a decision; mainly, if such an interval includes zero, there is no evidence in the data as to whether the action/treatment is on average beneficial or harmful for individuals with that value of $x$. One might need to conduct randomized experiments in order to draw a causal conclusion if $0\in (\mathcal{L}(x),\mathcal{U}(x))$. On the other hand, decision-making must in general be considered a game of four numbers $\{\mathcal{L}_1(x),\mathcal{L}_{-1}(x),\mathcal{L}(x), \mathcal{U}(x)\}$ rather than two, for example, $\{\mathcal{L}_1(x),\mathcal{L}_{-1}(x)\}$ or $\{\mathcal{L}(x),\mathcal{U}(x)\}$.
From the above point of view, the concept of optimality of a decision rule under partial identification cannot be absolute; rather, it is relative to a particular choice of decision-making criterion, whether minimax, maximax, maximin, and so on. Furthermore, an individualized decision rule might incorporate participants' risk preferences, as it might be unreasonable to assume everyone shares a common preference. In the Appendix, we provide expressions for the minimum utility, maximum regret, and maximum misclassification rate of certain 'optimal' rules in Table 1 (including the maximin utility and deterministic/randomized minimax regret rules) for practical use.
6. A Paradox: 1+1<2
In this section, we provide an interesting paradox regarding the use of partial identification to conduct individualized decision-making. To streamline our presentation, we use the (deterministic) minimax regret rule as a running example; however, any rule $\mathcal{D}\in \mathcal{D}^{opt}$ can suffer the same paradox. To simplify exposition, we consider the case with no $U$, that is, unbeknownst to the analyst, unmeasured confounding is absent. We consider the following model with covariate $X$ (e.g., female/male) distributed on $\{0, 1\}$ with equal probabilities,

$$\begin{aligned} \Pr(Y=1|X,A) &= X/16 + A/5 + 1/15,\\ \Pr(A=1|X,Z) &= X/16 + 2Z/5 + 1/2,\\ Z &\sim \text{Bernoulli}(1/2).\end{aligned}$$

With a slight abuse of notation, we use $0,1$ coding for $Z,A$ here. It is easy to see that the optimal rule is $\mathcal{D}^*=1$ for the entire population. After a simple calculation, the Balke-Pearl conditional average treatment effect bounds for $X=0,1$ both contain zero, with $|\mathcal{L}(0)|<|\mathcal{U}(0)|$ and $|\mathcal{L}(1)|>|\mathcal{U}(1)|$. The Balke-Pearl average treatment effect bounds marginalizing over $X$ also contain zero, with $|\mathcal{L}|<|\mathcal{U}|$.
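The following is a minimal simulation sketch of this data-generating process (Python, with the $0,1$ coding above; seed and sample size are arbitrary). It can be used to reproduce the probabilities $p_{y,a,z,x}$ that enter the bound calculations; note that the true CATE equals $1/5$ for both values of $X$, so $\mathcal{D}^*=1$ everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.integers(0, 2, n)                       # X ~ uniform on {0, 1}
z = rng.binomial(1, 0.5, n)                     # Z ~ Bernoulli(1/2)
a = rng.binomial(1, x / 16 + 2 * z / 5 + 1 / 2) # Pr(A=1 | X, Z)
y = rng.binomial(1, x / 16 + a / 5 + 1 / 15)    # Pr(Y=1 | X, A)

def p_hat(yv, av, zv, xv):
    """Empirical p_{y,a,z,x} = Pr(Y=y, A=a | Z=z, X=x)."""
    cond = (z == zv) & (x == xv)
    return np.mean((y[cond] == yv) & (a[cond] == av))
```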
As it is unbeknownst to the analyst whether unmeasured confounding is present or whether $X$ is an effect modifier, there are several possible strategies for analyzing the data.
(1) If one is concerned about individualized decision-making but does not worry about unmeasured confounding, one runs a standard regression type analysis and gets the right answer.

(2) If one is concerned about unmeasured confounding but is only interested in decision-making at the population level (i.e., based on an average treatment effect analysis), one can obtain IV bounds on the average treatment effect and also get the right answer.

(3) If one is concerned about individualized decision-making and also worries about unmeasured confounding, one gets the wrong answer for a subgroup.
We summarize results of the above strategies of analyses in Table 2.
Table 2. Correct/incorrect decisions using three types of data analyses.

Analysis                                        X=0        X=1
(1) Standard individualized analysis            $\surd$    $\surd$
(2) IV bounds on the average treatment effect   $\surd$    $\surd$
(3) IV bounds on the CATE (minimax regret)      $\surd$    $\times$
As can be seen from the table, mixing up two very difficult domains (individualized recommendation + unmeasured confounding) might make life harder (1 + 1 < 2). There are several lessons one can learn from this paradox:
a) A comparison between (1) and (3): It would be a good idea to first conduct a standard analysis (e.g., assuming Assumption 1) or other point identification approaches (e.g., assuming Assumption 7 of Cui & Tchetgen Tchetgen, 2021c) and then use IV bounds as a sanity check or, say, for policy improvement;

b) A comparison between (2) and (3): The paradox sheds light on the clear need to carefully distinguish variables used to make individualized decisions from variables used to address confounding concerns; similar to but different from Simpson's paradox, the aggregated and disaggregated answers can be opposite for a substantial subgroup.

c) (3) by itself: It might be a rather risky undertaking to narrow down an interval estimate to a definite decision given the overwhelming uncertainty; overly accounting for unmeasured confounding might erroneously recommend a suboptimal decision to a subgroup.
As motivated by the comparison between (1) and (3), we formalize the policy improvement idea following Kallus and Zhou (2018). Note that minimizing the worst-case possible regret against a baseline policy $\mathcal{D}_0$ would improve upon those individuals for whom $\mathcal{D}_0(X)=-1$ and $\mathcal{L}(X)>0$, or $\mathcal{D}_0(X)=1$ and $\mathcal{U}(X)<0$. We revisit the real data example in Section 4. We first run a standard analysis (random forest: $Y$ on $X,A$) and obtain $\mathcal{D}_0(X)=\operatorname{sign}\{\Pr(Y|X,A=1)-\Pr(Y|X,A=-1)\}$; among 3,010 subjects, 2,106 have $\mathcal{D}_0(X)=1$ and 904 have $\mathcal{D}_0(X)=-1$. Then we calculate the IV conditional average treatment effect bounds; there are 323 subjects with $\mathcal{L}(X)>0$ and 45 subjects with $\mathcal{U}(X)<0$. Then we use the IV bounds as a sanity check/improvement: only 4 subjects with $\mathcal{D}_0(X)=-1$ switch to $1$, and 8 subjects with $\mathcal{D}_0(X)=1$ switch to $-1$. Therefore, for most subjects in this application, the IV bounds do not necessarily invalidate the standard regression analysis, while IV bounds are still helpful to validate/invalidate decisions for a subgroup.
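A minimal sketch of this bound-based improvement step (hypothetical names; d0, L, U are subject-level NumPy arrays):

```python
import numpy as np

def improve_policy(d0, L, U):
    """Override a baseline rule d0 in {-1, 1} only where the IV bounds
    identify the sign of the CATE: L > 0 forces 1, U < 0 forces -1."""
    d = d0.copy()
    d[L > 0] = 1
    d[U < 0] = -1
    return d
```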
In this article, we illustrated how one might pursue individualized decision-making using partial identification in a comprehensive manner. We established a formal link between individualized decision-making under partial identification and classical decision theory by considering a lower bound perspective on the value/utility function. Building on this unified framework, we provided a novel minimax solution for opportunists who are unwilling to lose. We also pointed out that there is a mismatch between maximin utility and minimax regret. Moreover, we provided a paradox to ground several ideas on individualized decision-making and unmeasured confounding. To conclude, we list the following points that might be worth considering in future research.
As the proper use of multiple IVs is of growing interest in many applications, including statistical genetics studies, one could possibly construct multiple IVs and then try to find multiple bounds to conduct a better sanity check or improvement. Another possibility is to strengthen multiple IVs (Ertefaie et al., 2018; Zubizarreta et al., 2013). A stronger IV might provide a tighter bound, and therefore sign identification may be achieved (Cui & Tchetgen Tchetgen, 2021b).
Including additional covariates that are associated with $A$ or $Y$ for stratification and then marginalizing over these covariates would potentially give a tighter bound. Therefore, carefully choosing the variables used to stratify (which can be the same as the decision variables or a larger set of variables) might be of interest for both theoretical and practical purposes.
The proposed minimax regret method leveraging a randomization scheme, as well as the other strategies in Table 1, might be of interest in optimal control settings such as reinforcement learning and contextual bandits, where exploitation and exploration are under consideration. In addition, given observational data in which a potential IV is available, one can use different strategies to construct an initial randomized policy for use in a reinforcement learning or bandit algorithm.
One important difference between decision-making with IV partial identification and classical decision theory is the source of uncertainty. For the former, unmeasured confounding creates uncertainty, and overthinking confounding might create overwhelming uncertainty. Therefore, to better assess the uncertainty, it would also be of great interest to formalize a sensitivity analysis procedure for point identification, such as under assumptions of no unmeasured confounding or no unmeasured common effect modifiers (Cui & Tchetgen Tchetgen, 2021c). A similar question has also been raised by Han (2021).
The author is thankful to three referees, associate editor, and Editor-in-Chief for useful comments, which led to an improved manuscript.
The author is supported by NUS grant R-155-000-229-133.
Angrist, J. D., Imbens, G. W., & Rubin, D. B. (1996). Identification of causal effects using instrumental variables. Journal of the American Statistical Association, 91(434), 444–455. https://doi.org/10.2307/2291629
Arrow, K. J., & Hurwicz, L. (1972). An optimality criterion for decision-making under ignorance. Uncertainty and Expectations in Economics (Oxford).
Athey, S., & Wager, S. (2021). Policy learning with observational data. Econometrica, 89(1), 133–161. https://doi.org/10.3982/ECTA15732
Balke, A., & Pearl, J. (1997). Bounds on treatment effects from studies with imperfect compliance. Journal of the American Statistical Association, 92(439), 1171–1176. https://doi.org/10.1080/01621459.1997.10474074
Card, D. (1993). Using geographic variation in college proximity to estimate the return to schooling. National Bureau of Economic Research.
Chakraborty, B., & Moodie, E. (2013). Statistical methods for dynamic treatment regimes. Springer. https://doi.org/10.1007/978-1-4614-7428-9
Cui, Y., & Tchetgen Tchetgen, E. (2021a). Machine intelligence for individualized decision making under a counterfactual world: A rejoinder. Journal of the American Statistical Association, 116(533), 200–206. https://doi.org/10.1080/01621459.2021.1872580
Cui, Y., & Tchetgen Tchetgen, E. (2021b). On a necessary and sufficient identification condition of optimal treatment regimes with an instrumental variable. Statistics & Probability Letters, 178, Article 109180. https://doi.org/10.1016/j.spl.2021.109180
Cui, Y., & Tchetgen Tchetgen, E. (2021c). A semiparametric instrumental variable approach to optimal treatment regimes under endogeneity (with discussion). Journal of the American Statistical Association, 116(533), 162–173. https://doi.org/10.1080/01621459.2020.1783272
Ertefaie, A., Small, D. S., & Rosenbaum, P. R. (2018). Quantitative evaluation of the trade-off of strengthened instruments and sample size in observational studies. Journal of the American Statistical Association, 113(523), 1122–1134. https://doi.org/10.1080/01621459.2017.1305275
Greenland, S. (2000). An introduction to instrumental variables for epidemiologists. International Journal of Epidemiology, 29(4), 722–729. https://doi.org/10.1093/ije/29.4.722
Han, S. (2019). Optimal dynamic treatment regimes and partial welfare ordering. arXiv. https://doi.org/10.48550/arXiv.1912.10014
Han, S. (2020). Identification in nonparametric models for dynamic treatment effects. Journal of Econometrics, 225(2), 132–147. https://doi.org/10.1016/j.jeconom.2019.08.014
Han, S. (2021). Comment: Individualized treatment rules under endogeneity. Journal of the American Statistical Association, 116(533), 192–195. https://doi.org/10.1080/01621459.2020.1831923
Hernan, M., & Robins, J. (2006). Instruments for causal inference: An epidemiologist's dream? Epidemiology (Cambridge, Mass.), 17(4), 360–372. https://doi.org/10.1097/01.ede.0000222409.00878.37
Imbens, G. W., & Angrist, J. D. (1994). Identification and estimation of local average treatment effects. Econometrica, 62(2), 467–475. https://doi.org/10.2307/2951620
Kallus, N., Mao, X., & Zhou, A. (2019). Interval estimation of individual-level causal effects under unobserved confounding. In K. Chaudhuri & M. Sugiyama (Eds.), Proceedings of machine learning research: Vol. 89. Proceedings of the twenty-second international conference on artificial intelligence and statistics (pp. 2281–2290). http://proceedings.mlr.press/v89/kallus19a.html
Kallus, N., & Zhou, A. (2018). Confounding-robust policy improvement. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems (Vol. 31). Curran Associates, Inc. https://proceedings.neurips.cc/paper/2018/file/3a09a524440d44d7f19870070a5ad42f-Paper.pdf
Kosorok, M. R., & Laber, E. B. (2019). Precision medicine. Annual Review of Statistics and Its Application, 6(1), 263–286. https://doi.org/10.1146/annurev-statistics-030718-105251
Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R News, 2(3), 18–22. https://cran.r-project.org/doc/Rnews/
Manski, C. F. (1990). Nonparametric bounds on treatment effects. The American Economic Review, 80(2), 319–323.
Manski, C. F., & Pepper, J. V. (2000). Monotone instrumental variables: With an application to the returns to schooling. Econometrica, 68, 997–1010. https://doi.org/10.3386/t0224
Murphy, S. A. (2003). Optimal dynamic treatment regimes. Journal of the Royal Statistical Society: Series B, 65(2), 331–355. https://doi.org/10.1111/1467-9868.00389
Murphy, S. A., van der Laan, M. J., Robins, J. M., & Conduct Problems Prevention Research Group. (2001). Marginal mean models for dynamic regimes. Journal of the American Statistical Association, 96(456), 1410–1423. https://doi.org/10.1198/016214501753382327
Okui, R., Small, D. S., Tan, Z., & Robins, J. M. (2012). Doubly robust instrumental variable regression. Statistica Sinica, 22(1), 173–205. https://doi.org/10.5705/ss.2009.265
Pu, H., & Zhang, B. (2021). Estimating optimal treatment rules with an instrumental variable: A partial identification learning approach. Journal of the Royal Statistical Society: Series B, 83(2), 318–345. https://doi.org/10.1111/rssb.12413
Qian, M., & Murphy, S. A. (2011). Performance guarantees for individualized treatment rules. Annals of Statistics, 39(2), 1180–1210. https://doi.org/10.1214/10-AOS864
Qiu, H., Carone, M., Sadikova, E., Petukhova, M., Kessler, R. C., & Luedtke, A. (2021a). Optimal individualized decision rules using instrumental variable methods (with discussion). Journal of the American Statistical Association, 116(533), 174–191. https://doi.org/10.1080/01621459.2020.1745814
Qiu, H., Carone, M., Sadikova, E., Petukhova, M., Kessler, R. C., & Luedtke, A. (2021b). Rejoinder: Optimal individualized decision rules using instrumental variable methods. Journal of the American Statistical Association, 116(533), 207–209. https://doi.org/10.1080/01621459.2020.1865166
Robins, J. M. (1989). The analysis of randomized and non-randomized AIDS treatment trials using a new approach to causal inference in longitudinal studies. US Public Health Service.
Robins, J. M. (2004). Optimal structural nested models for optimal sequential decisions. In Proceedings of the Second Seattle Symposium in Biostatistics (pp. 189–326). https://doi.org/10.1007/978-1-4419-9076-1_11
Rubin, D. B., & van der Laan, M. J. (2012). Statistical issues and limitations in personalized medicine research with clinical trials. The International Journal of Biostatistics, 8(1), 18. https://doi.org/10.1515/1557-4679.1423
Swanson, S. A., Hernán, M. A., Miller, M., Robins, J. M., & Richardson, T. S. (2018). Partial identification of the average treatment effect using instrumental variables: Review of methods for binary instruments, treatments, and outcomes. Journal of the American Statistical Association, 113(522), 933–947. https://doi.org/10.1080/01621459.2018.1434530
Tan, Z. (2006). Regression and weighting methods for causal inference using instrumental variables. Journal of the American Statistical Association, 101(476), 1607–1618. https://doi.org/10.1198/016214505000001366
Tsiatis, A. A., Davidian, M., Holloway, S. T., & Laber, E. B. (2019). Dynamic treatment regimes: Statistical methods for precision medicine. CRC Press. https://doi.org/10.1201/9780429192692
Wang, L., Robins, J. M., & Richardson, T. S. (2017). On falsification of the binary instrumental variable model. Biometrika, 104(1), 229–236. https://doi.org/10.1093/biomet/asw064
Wang, L., & Tchetgen Tchetgen, E. (2018). Bounded, efficient and multiply robust estimation of average treatment effects using instrumental variables. Journal of the Royal Statistical Society: Series B, 80(3), 531–550. https://doi.org/10.1111/rssb.12262
Yadlowsky, S., Namkoong, H., Basu, S., Duchi, J., & Tian, L. (2018). Bounds on the conditional and average treatment effect with unobserved confounding factors. arXiv. https://doi.org/10.48550/arXiv.1808.09521
Zhang, B., Tsiatis, A. A., Davidian, M., Zhang, M., & Laber, E. (2012). Estimating optimal treatment regimes from a classification perspective. Stat, 1(1), 103–114. https://doi.org/10.1002/sta.411
Zhang, B., Tsiatis, A. A., Laber, E. B., & Davidian, M. (2012). A robust method for estimating optimal treatment regimes. Biometrics, 68(4), 1010–1018. https://doi.org/10.1111/j.1541-0420.2012.01763.x
Zhang, B., & Pu, H. (2021). Discussion of Cui and Tchetgen Tchetgen (2020) and Qiu et al. (2020). Journal of the American Statistical Association, 116(533), 196–199. https://doi.org/10.1080/01621459.2020.1832500
Zhao, Y., Zeng, D., Rush, A. J., & Kosorok, M. R. (2012). Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499), 1106–1118. https://doi.org/10.1080/01621459.2012.695674
Zubizarreta, J. R., Small, D. S., Goyal, N. K., Lorch, S., & Rosenbaum, P. R. (2013). Stronger instruments via integer programming in an observational study of late preterm birth outcomes. The Annals of Applied Statistics, 7(1), 25–50. https://doi.org/10.1214/12-AOAS582
Appendix A. Derivation of Lower Bounds of Value Function
The following was originally derived in Cui and Tchetgen Tchetgen (2021c). It is helpful to provide it here.
Proof. Note that
$$\begin{aligned} E[Y_{\mathcal{D}(X)}|X] &= E(Y_{1}|X)\, I\{\mathcal{D}(X)=1\} + E(Y_{-1}|X)\, I\{\mathcal{D}(X)=-1\},\\ E[Y_{\mathcal{D}(X)}|X] &= E(Y_{1}-Y_{-1}|X)\, I\{\mathcal{D}(X)=1\} + E(Y_{-1}|X),\\ E[Y_{\mathcal{D}(X)}|X] &= E(Y_{-1}-Y_{1}|X)\, I\{\mathcal{D}(X)=-1\} + E(Y_{1}|X).\end{aligned}$$

By $\mathcal{L}_{-1}(X)\leq E(Y_{-1}|X)\leq \mathcal{U}_{-1}(X)$ and $\mathcal{L}_{1}(X)\leq E(Y_{1}|X)\leq \mathcal{U}_{1}(X)$, one has the following bounds,

$$(A1)\quad \begin{aligned} &[1-w(X)]\,[\mathcal{L}(X)\, I\{\mathcal{D}(X)=1\} + \mathcal{L}_{-1}(X)] + w(X)\,[-\mathcal{U}(X)\, I\{\mathcal{D}(X)=-1\} + \mathcal{L}_{1}(X)] \\ &\leq \mathcal{L}_{1}(X)\, I\{\mathcal{D}(X)=1\} + \mathcal{L}_{-1}(X)\, I\{\mathcal{D}(X)=-1\} \leq E[Y_{\mathcal{D}(X)}|X], \end{aligned}$$

where $0 \leq w(x)\leq 1$ for any $x$. Therefore, we complete the proof by taking expectations on both sides of Equation A1.
Appendix B. Minimum Utility, Maximum Regret, and Maximum Misclassification Rate of Several 'Optimal' Rules
We give the minimum value function, maximum regret, and maximum misclassification rate over $\mathcal{D}\in \mathcal{D}^{opt}$, expressed in terms of the observed data:

$$\begin{aligned} & E[\max(\mathcal{L}_{-1}(X),\mathcal{L}_{1}(X))\, I\{0\notin (\mathcal{L}(X),\mathcal{U}(X))\} + \min(\mathcal{L}_{-1}(X),\mathcal{L}_{1}(X))\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}],\\ & E[\max(|\mathcal{L}(X)|,|\mathcal{U}(X)|)\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}],\\ & E[I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}],\end{aligned}$$

respectively. While the maximum misclassification rate remains the same, the minimum value function and maximum regret for a given $\mathcal{D}$ can be different. For instance, the minimum value function and maximum regret of the maximin rule in Table 1 are:

$$\begin{aligned} & E[\max(\mathcal{L}_{-1}(X),\mathcal{L}_{1}(X))],\\ & E\big[\big[|\mathcal{L}(X)|\, I\{\mathcal{L}_{-1}(X)<\mathcal{L}_{1}(X)\} + |\mathcal{U}(X)|\, I\{\mathcal{L}_{-1}(X)>\mathcal{L}_{1}(X)\}\big]\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}\big],\end{aligned}$$

respectively. The minimum value function and maximum regret of the minimax rule in Table 1 are:

$$\begin{aligned} & E\Big[\max(\mathcal{L}_{-1}(X),\mathcal{L}_{1}(X))\, I\{0\notin (\mathcal{L}(X),\mathcal{U}(X))\} + \big[\mathcal{L}_{1}(X)\, I\{|\mathcal{L}(X)|<|\mathcal{U}(X)|\} + \mathcal{L}_{-1}(X)\, I\{|\mathcal{L}(X)|>|\mathcal{U}(X)|\}\big]\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}\Big],\\ & E[\min(|\mathcal{L}(X)|,|\mathcal{U}(X)|)\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}],\end{aligned}$$

respectively. The minimum value function and maximum regret of the randomized minimax rule in Section 5.3 are:

$$\begin{aligned} & E\bigg[\max(\mathcal{L}_{-1}(X),\mathcal{L}_{1}(X))\, I\{0\notin (\mathcal{L}(X),\mathcal{U}(X))\} + \left[\mathcal{L}_{1}(X)\, \frac{\mathcal{U}(X)}{\mathcal{U}(X)-\mathcal{L}(X)} + \mathcal{L}_{-1}(X)\, \frac{-\mathcal{L}(X)}{\mathcal{U}(X)-\mathcal{L}(X)}\right] I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}\bigg],\\ & E\left[-\frac{\mathcal{L}(X)\mathcal{U}(X)}{\mathcal{U}(X)-\mathcal{L}(X)}\, I\{0\in (\mathcal{L}(X),\mathcal{U}(X))\}\right],\end{aligned}$$
respectively.
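For practical use, these quantities are plain expectations of functions of the four bounds. The following is a minimal sketch evaluating the three displayed quantities for the randomized minimax rule on subject-level bound arrays (hypothetical names):

```python
import numpy as np

def randomized_minimax_summaries(l1, lm1, L, U):
    """Minimum value, maximum regret, and maximum misclassification rate of
    the randomized minimax rule, from subject-level (float) bound arrays."""
    amb = (L < 0) & (U > 0)                  # 0 lies in (L(X), U(X))
    min_val = np.where(amb, 0.0, np.maximum(l1, lm1))
    regret = np.zeros_like(L)
    # Interior case: value mixes L_1 and L_{-1} with weights p* and 1 - p*.
    min_val[amb] = (l1[amb] * U[amb] - lm1[amb] * L[amb]) / (U[amb] - L[amb])
    regret[amb] = -L[amb] * U[amb] / (U[amb] - L[amb])
    return min_val.mean(), regret.mean(), amb.mean()
```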
©2021 Yifan Cui. This article is licensed under a Creative Commons Attribution (CC BY 4.0) International license, except where otherwise indicated with respect to particular material included in the article.
\begin{document}
\begin{frontmatter}
\title{Asymptotic associate primes}
\author{Dipankar Ghosh} \ead{[email protected]}
\address{Chennai Mathematical Institute, H1, SIPCOT IT Park, Siruseri, Kelambakkam, Chennai 603103, Tamil Nadu, India}
\author{Provanjan Mallick} \ead{[email protected]}
\author{Tony J. Puthenpurakal\corref{mycorrespondingauthor}} \ead{[email protected]}
\address{Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India}
\cortext[mycorrespondingauthor]{Corresponding author}
\begin{abstract} We investigate three cases regarding asymptotic associate primes. First, assume $ (A,\mathfrak{m}) $ is an excellent Cohen-Macaulay (CM) non-regular local ring, and $ M = \operatorname{Syz}^A_1(L) $ for some maximal CM $ A $-module $ L $ which is free on the punctured spectrum. Let $ I $ be a normal ideal. In this case, we examine when $ \mathfrak{m} \notin \operatorname{Ass}(M/I^nM) $ for all $ n \gg 0 $. We give sufficient evidence to show that this occurs rarely. Next, assume that $ (A,\mathfrak{m}) $ is excellent Gorenstein non-regular isolated singularity, and $ M $ is a CM $ A $-module with $\operatorname{projdim}_A(M) = \infty $ and $ \dim(M) = \dim(A) -1 $. Let $ I $ be a normal ideal with analytic spread $ l(I) < \dim(A) $. In this case, we investigate when $\mathfrak{m} \notin \textrm{Ass} \operatorname{Tor}^A_1(M, A/I^n)$ for all $n \gg 0$. We give sufficient evidence to show that this also occurs rarely. Finally, suppose $ A $ is a local complete intersection ring. For finitely generated $ A $-modules $ M $ and $ N $, we show that if $ \operatorname{Tor}_i^A(M, N) \neq 0 $ for some $ i > \dim(A) $, then there exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following holds true: (i) $ \mathfrak{p} \in \operatorname{Ass}\left( \operatorname{Tor}_{2i}^A(M, N) \right) $ for all $ i \gg 0 $; (ii) $ \mathfrak{p} \in \operatorname{Ass}\left( \operatorname{Tor}_{2i+1}^A(M, N) \right) $ for all $ i \gg 0 $. We also analyze the asymptotic behaviour of $\operatorname{Tor}^A_i(M, A/I^n)$ for $i,n \gg 0$ in the case when $I$ is principal or $I$ has a principal reduction generated by a regular element. \end{abstract}
\begin{keyword} Asymptotic associate primes; Asymptotic grade; Associated graded rings and modules; Local cohomology; Tor; Complete intersections \MSC[2010] Primary 13A17, 13A30, 13D07; Secondary 13A15, 13H10 \end{keyword}
\end{frontmatter}
\section{Introduction}
In this paper, we investigate three cases regarding asymptotic associate primes. We introduce them one by one.
\textbf{I:} Let $(A,\mathfrak{m})$ be a Noetherian local ring of dimension $d$, and let $M$ be a finitely generated $A$-module. By a result of Brodmann \cite{Mb0}, there exists $n_0$ such that the set of associate primes $\operatorname{Ass}_A(M/I^nM)=\operatorname{Ass}_A(M/I^{n_0}M)$ for all $n\geqslant n_0$. We denote this eventual constant set by $\operatorname{Ass}^{\infty}_I(M)$.
A natural question is when does $\mathfrak{m} \in \operatorname{Ass}^{\infty}_I(M)$, or the opposite, $\mathfrak{m} \notin \operatorname{Ass}^{\infty}_I(M)$, hold? In general, this question is hopeless to resolve. So we make a few assumptions which are quite general but still amenable to answering our question.
(I.1) We first assume that $(A,\mathfrak{m})$ is an excellent Cohen-Macaulay local ring with infinite residue field. We note that this assumption is quite general.
(I.2) We also assume that $M$ is maximal Cohen-Macaulay (MCM). In fact, in the study of modules over Cohen-Macaulay rings, the class of MCM modules is the most natural class to investigate. To keep things interesting, we also assume $M$ is not free. In particular, we are assuming $A$ is also not regular.
We note that in general the answer to the question of when $\mathfrak{m} \in \operatorname{Ass}^{\infty}_I(A)$ is \emph{not} known. However, by results of Ratliff and McAdam, a positive answer is known when $I$ is normal, i.e., $I^n$ is integrally closed for all $n \geqslant 1$. In this case, it is known that $\mathfrak{m} \in \operatorname{Ass}^{\infty}_I(A)$ if and only if $l(I)$, the analytic spread of $I$, is equal to $d = \dim A$; see \cite[4.1]{Mcada06}. So our third assumption is
(I.3) $I$ is a normal ideal of height $ \geqslant 2$.
Before proceeding further, we want to remark that in analytically unramified local rings (i.e., rings whose completion is reduced), there exist plenty of normal ideals. In fact, for any ideal $ I $, it is not terribly difficult to prove that for all $n \gg 0$, the ideal $ \overline{I^n} $ is normal (where $\overline{J}$ denotes the integral closure of an ideal $J$).
Finally, we note that as we are only interested in the question on whether $\mathfrak{m} \in \operatorname{Ass}^{\infty}_I(M)$ or not, it is convenient to assume
(I.4) $M$ is free on the punctured spectrum, i.e., $M_P$ is free for every prime ideal $P \neq \mathfrak{m} $.
We note that (I.4) is automatic if $A$ is an isolated singularity, i.e., $A_P$ is regular for every prime $P \neq \mathfrak{m} $. In general (even when $A$ is not an isolated singularity), any sufficiently high syzygy of a finite length module (of infinite projective dimension) will be free on the punctured spectrum.
\begin{remark}
As discussed above, our hypotheses are satisfied by a large class of rings, modules and ideals. \end{remark}
Before stating our results, we need to introduce some notation. Let $G_I(A) = \bigoplus_{n \geqslant 0}I^n/I^{n+1}$ be the associated graded ring of $A$ with respect to $I$. Let $G_I(A)_+ = \bigoplus_{n \geqslant 1} I^n/I^{n+1}$ be its irrelevant ideal. If $M$ is an $A$-module, then $G_I(M) = \bigoplus_{n \geqslant 0} I^nM/I^{n+1}M$ is the associated graded module of $M$ with respect to $I$ (considered as a $G_I(A)$-module).
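For orientation, we recall a standard example (included only as an illustration): if $(A,\mathfrak{m})$ is a regular local ring of dimension $d$ with residue field $k$, then $G_{\mathfrak{m}}(A) \cong k[X_1, \ldots, X_d]$ is a polynomial ring, and so $\operatorname{grade}(G_{\mathfrak{m}}(A)_+, G_{\mathfrak{m}}(A)) = d$.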
Our first result is
\begin{theorem}\label{first}With assumptions as in {\rm I.1, I.2, I.3 and I.4}, suppose that $d \geqslant 3$, and that $M = \operatorname{Syz}^A_1(L)$ for some MCM $A$-module $L$. We have that
\[
\text{if} \ \mathfrak{m} \notin \operatorname{Ass}^{\infty}_I(M), \text{ then } \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 \text{ for all } n \gg 0.
\] \end{theorem}
We note that the assumption $M = \operatorname{Syz}^A_1(L)$ for an MCM $ A $-module $L$ is automatically satisfied if $A$ is Gorenstein.
We now describe the significance of Theorem~\ref{first}. The third author of this paper has worked extensively on associated graded rings and modules. He feels that the condition $ \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 $ for all $ n \gg 0 $ is quite special. In `most cases' we will have only $ \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(M)) = 1 $ for all $ n \gg 0 $. So Theorem~\ref{first} implies that in `most cases' we should have $ \mathfrak{m} \in \operatorname{Ass}^{\infty}_I(M) $.
By a result of Melkersson and Schenzel \cite[Theorem~1]{MS}, it is known that for a finitely generated $A$-module $E$, the set $\operatorname{Ass}_A\left( \operatorname{Tor}^A_1(E, A/I^n) \right)$ is constant for all $ n \gg 0 $. We denote this stable value by $T^\infty_1(I, E)$. We note that if $L$ is a (non-free) MCM $A$-module which is free on the punctured spectrum of $A$, then $\operatorname{Tor}_1^A(L, A/I^n)$ has finite length for all $n \geqslant 1$. In this case, $\mathfrak{m} \notin T^\infty_1(I, L)$ if and only if $\operatorname{Tor}^A_1(L, A/I^n) = 0$ for all $n \gg 0$. If $\mathfrak{m} \notin \operatorname{Ass}^\infty_I(A)$ (this holds if $I$ is normal and $l(I) < d$), then it is easy to see that $\operatorname{Tor}^A_1(L, A/I^n) = 0$ for all $n \gg 0$ if and only if $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(\operatorname{Syz}^A_1(L)) $. If the latter holds, then by Theorem~\ref{first}, we will have $\operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(\operatorname{Syz}^A_1(L))) \geqslant 2$ for all $ n \gg 0 $. Thus another significance of Theorem~\ref{first} is that it suggests that in `most cases' if $I$ is a normal ideal with height $ \geqslant 2$ and $ l(I) < d $, then for a non-free MCM module $ L $, we should have $\operatorname{Tor}^A_1(L, A/I^n) \neq 0$ for all $n \gg 0$.
\textbf{II:} In the previous subsection, we considered $T^\infty_1(I, M)$ when $M$ is MCM and locally free on the punctured spectrum. In this subsection, we consider the case when $M$ is Cohen-Macaulay of dimension $d -1$. A trivial case is when $ \operatorname{projdim}_A(M) $ is finite: then it is easy to see that if $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(A) $, then $ \mathfrak{m} \notin T^\infty_1(I, M) $.
If the projective dimension of $M$ is infinite, then we are unable to analyze $T^\infty_1(I, M)$ over arbitrary Cohen-Macaulay rings. However, we have made progress on this question when $A$ is a Gorenstein ring with an isolated singularity.
In general, when a local ring $(R,\mathfrak{n})$ is Gorenstein, and $D$ is a finitely generated $R$-module, there exists an MCM approximation of $D$, i.e., an exact sequence $s \colon 0 \rightarrow Y \rightarrow X \rightarrow D \rightarrow 0$, where $X$ is an MCM $R$-module and $\operatorname{projdim}_R(Y) < \infty$. It is known that if $s^\prime \colon 0 \rightarrow Y^\prime \rightarrow X^\prime \rightarrow D \rightarrow 0$ is another MCM approximation of $D$, then $X$ and $X^\prime$ are stably isomorphic, i.e., there exist finitely generated free $R$-modules $F, G$ such that $X\oplus F \cong X^\prime \oplus G$. It is clear that $X$ is free if and only if $\operatorname{projdim}_R(D) < \infty$. Thus if $\operatorname{projdim}_R(D) = \infty$, then $\operatorname{Syz}^R_1(X)$ is an invariant of $D$.
Our second result is
\begin{theorem}[= \ref{sir2}]\label{second}
Let $ (A,\mathfrak{m}) $ be an excellent Gorenstein local ring of dimension $ d \geqslant 3 $. Suppose $ A $ has an isolated singularity. Let $I$ be a normal ideal of $A$ with $\operatorname{height}(I)\geqslant2$ and $l(I)<d$. Let $M$ be a Cohen-Macaulay $A$-module of dimension $d-1$ with $\operatorname{projdim}_A(M)=\infty$. Let $s \colon 0 \rightarrow Y \rightarrow X \rightarrow M \rightarrow 0$ be an MCM approximation of $M$. Set $ N := \operatorname{Syz}^A_1(X) $. Then the following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{m} \notin T^\infty_1(I, M) $ {\rm(}i.e., $ \mathfrak{m} \notin \operatorname{Ass}_A(\operatorname{Tor}^A_1(M,A/{I^n})) $ for all $ n \gg 0 ${\rm )}.
\item $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(N) $ {\rm (}equivalently, $ \operatorname{depth}(N/{I^nN}) \geqslant 1 $ for all $ n \gg 0 ${\rm )}.
\end{enumerate}
Furthermore, if this holds true, then $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(N)) \geqslant 2 $ for all $ n \gg 0 $. \end{theorem}
As per our discussion after Theorem~\ref{first}, it follows that in `most cases' we should have $ \mathfrak{m} \in \operatorname{Ass}_A(\operatorname{Tor}^A_1(M,A/{I^n}))~\mbox{for all } n\gg0$.
\textbf{III$\alpha$:} Let $(A,\mathfrak{m})$ be a local complete intersection ring of codimension $c$. Let $M$ and $N$ be finitely generated $A$-modules. Set \[ E(M,N) := \bigoplus_{i \geqslant 0}\operatorname{Ext}^i_A(M,N), \quad \mbox{and} \quad T(M,N) := \bigoplus_{i \geqslant 0}\operatorname{Tor}^A_i(M,N). \] It is well-known that $ E(M,N)\otimes_A \widehat{A} $ and $ T(M,N)\otimes_A \widehat{A} $ are modules over a ring of cohomology operators $S := \widehat{A}[\xi_1,\ldots, \xi_c]$, where $ \widehat{A} $ is the $ \mathfrak{m} $-adic completion of $ A $. Moreover, $E(M,N)\otimes_A \widehat{A}$ is a finitely generated graded $S$-module. But the $S$-module $T(M,N)\otimes_A \widehat{A}$ is very rarely finitely generated. However, by a result of Gulliksen \cite[Theorem~3.1]{G}, if $\operatorname{Tor}_i^A(M,N)$ has finite length for all $i \gg 0$ (say from $i \geqslant i_0$), then the $ S $-submodule \[
T_{\geqslant i_0}(M,N) \otimes_A \widehat{A} \;\; := \; \bigoplus_{i \geqslant i_0}\operatorname{Tor}^{\widehat{A}}_i\big(\widehat{M},\widehat{N}\big) \quad \mbox{is *Artinian.} \]
By standard arguments, it follows that for each $ l = 0, 1 $, the set $ \operatorname{Ass}_A(\operatorname{Ext}^{2i+l}_A(M,N)) $ is constant for all $ i \gg 0 $. However, we do not have a similar result for Tor. By a result of Avramov and Buchweitz (Theorem~\ref{theorem: vanishing of Tor}), the case when $\operatorname{Tor}^A_i(M, N) = 0$ for all $i \gg 0$ is well-understood. Our third result is
\begin{theorem}[$ = $ \ref{corollary: asymptotic Ass on Tor}]\label{third-alpha}
Let $ A $ be a local complete intersection ring. Let $ M $ and $ N $ be finitely generated $ A $-modules. Assume that $ \operatorname{Tor}_i^A(M, N) \neq 0 $ for some $ i > \dim(A) $. Then there exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i}^A(M, N) \right) $ for all $ i \gg 0 $;
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i+1}^A(M, N) \right) $ for all $ i \gg 0 $.
\end{enumerate} \end{theorem}
\textbf{III$\beta$:} In \cite[Corollary~4.3]{GP}, the first and the third author proved that if $(A,\mathfrak{m})$ is a local complete intersection ring, $I$ is an ideal of $A$, and $M, N$ are finitely generated $A$-modules, then for every $l = 0, 1$, the set $\operatorname{Ass}_A(\operatorname{Ext}_A^{2i+l}(M, N/I^nN))$ is constant for all $i, n \gg 0$. We do not have a similar result for Tor. It follows from \cite[Theorem~6.1]{GP} that the complexity of $N/I^n N$ is constant for all $ n \gg 0 $. Thus, by results of Avramov and Buchweitz, the case when $\operatorname{Tor}^A_i(M, N/I^nN) = 0$ for all $ i, n \gg 0 $ is well-understood. Our final result is
\begin{theorem}[$ = $ \ref{corollary: asymptotic ass: Tor: for special ideals}]\label{third-beta}
Let $ A $ be a local complete intersection ring. Let $ M $ be a finitely generated $ A $-module, and $ I $ be an ideal of $ A $. Suppose either $ I $ is principal or $ I $ has a principal reduction generated by an $ A $-regular element. Then there exist $ i_0 $ and $ n_0 $ such that either $ \operatorname{Tor}^A_i(M, A/I^n) = 0 $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $, or there is a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $;
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i+1}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $.
\end{enumerate} \end{theorem}
\emph{Techniques used to prove our results:} To prove Theorems~\ref{third-alpha} and \ref{third-beta}, we use the well-known technique of Eisenbud operators over resolutions of modules over complete complete-intersection rings. We also use results of Gulliksen and Avramov-Buchweitz stated above.
\emph{Techniques used to prove our results:} To prove Theorems~\ref{third-alpha} and \ref{third-beta}, we use the well-known technique of Eisenbud operators over resolutions of modules over complete complete-intersection rings. We also use results of Gulliksen and Avramov-Buchweitz stated above.
However, to prove Theorems~\ref{first} and \ref{second}, we use a new technique in the study of asymptotic primes, namely, we investigate the function \[ \xi_M^I(n) := \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)). \] Note that by a result of Elias \cite[Proposition~2.2]{E}, we have that $\operatorname{depth}(G_{I^n}(A))$ is constant for all $n \gg 0$ (and a similar argument works for modules). However, the function $\xi_M^I$ (when $\dim(A/I) > 0$) has not been investigated before, either in the study of blow-up algebras or in connection with associated primes. Regarding $\xi_M^I$, we prove two results. The first is
\begin{theorem}[$ = $ \ref{thm-xi-1}]\label{xi-1}
Let $ (A,\mathfrak{m}) $ be a Noetherian local ring. Let $ I $ be an ideal of $ A $, and $M$ be a finitely generated $ A $-module such that $ \operatorname{grade}(I,M) = g > 0 $. Then either $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) = 1 $ for all $ n \gg 0 $, or
\[
\operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 \quad \mbox{for all } \; n \gg 0.
\] \end{theorem}
Our second result in this direction is
\begin{theorem}[$ = $ \ref{thm-xi-2}]\label{xi-2}
Let $ (A,\mathfrak{m}) $ be a Cohen-Macaulay local ring, and $ I $ be an ideal of $ A $ such that $ \operatorname{height}(I) \geqslant \dim(A) - 2 $. Let $ M $ be an MCM $ A $-module. Then $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) $ is constant for all $ n \gg 0 $. \end{theorem}
Although he does not have an example, the third author feels that $ \xi_M^I $ may not be constant for $ n \gg 0 $ if $ \dim(A/I) \geqslant 3 $.
\begin{remark}
In \cite[Theorem~3.4]{Sh99}, Huckaba and Marley proved that if $ A $ is Cohen-Macaulay of dimension $ \geqslant 2 $, and if $ I $ is a normal ideal with $ \operatorname{grade}(I) \geqslant 1 $, then $ \operatorname{depth}(G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $. A crucial ingredient for the proofs of our results is to compute $\operatorname{grade}(G_{I^n}(A)_+,G_{I^n}(A))$ for all $n\gg0$ when $I$ is normal. So we prove the following result. \end{remark}
\begin{theorem}[$ = $ \ref{RR}]\label{normal}
Let $ A $ be an excellent Cohen-Macaulay local ring. Let $ I $ be a normal ideal of $A$ such that $ \operatorname{grade}(I) \geqslant 2 $. Then $ \operatorname{grade}(G_{I^n}(A)_+,G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $. \end{theorem}
We now describe in brief the contents of this paper. In Section~\ref{sec2}, we discuss a few preliminaries on grade and local cohomology that we need. In Section~\ref{sec3}, we investigate the function $\xi_M^I(n) := \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M))$ and prove Theorems~\ref{xi-1}, \ref{xi-2} and \ref{normal}. In Section~\ref{sec4}, we prove Theorems~\ref{first} and \ref{second}. Finally, in Section~\ref{sec5}, we prove Theorems~\ref{third-alpha} and \ref{third-beta}.
\section{Preliminaries on grade and local cohomology}\label{sec2}
Throughout this article, all rings are assumed to be commutative Noetherian rings with identity. Throughout, let $(A,\mathfrak{m})$ be a local ring of dimension $d$ with infinite residue field, and $M$ be a finitely generated $A$-module. Let $I$ be an ideal of $A$ (which need not be $\mathfrak{m}$-primary). If $p \in M$ is non-zero, and $j$ is the largest integer such that $p\in {I}^{j} M$, then $p^{*}$ denotes the image of $p$ in $I^{j} M/I^{j+1} M$; we set $0^* := 0$. Set $\mathcal{R}(I) := \bigoplus_{n\geqslant0} I^nt^n$, the Rees ring, and $\hat{\mathcal{R}}(I) := \bigoplus_{n\in \mathbb{Z}} I^nt^n$, the extended Rees ring of $A$ with respect to $I$, where $ I^n = A $ for every $ n \leqslant 0 $. Set $\mathcal{R}(I,M) := \bigoplus_{n\geqslant 0} I^n M t^n$, the Rees module, and $\hat{\mathcal{R}}(I,M) := \bigoplus_{n\in \mathbb{Z}} I^nMt^n$, the extended Rees module of $M$ with respect to $I$. Let $G_{I}(A) := \bigoplus_{n\geqslant 0} I^{n} /I^{n+1} $ be the associated graded ring of $A$ with respect to $I$, and $G_{I}(M) := \bigoplus_{n\geqslant 0} I^{n} M/I^{n+1}M $ be the associated graded module of $M$ with respect to $I$. Throughout this article, we denote the ideal $\bigoplus_{n\geqslant1} I^{n}t^n$ of $\mathcal{R}(I)$ by $R_+$, and the ideal $\bigoplus_{n\geqslant 1} I^{n} /I^{n+1}$ of $G_I(A)$ by $G_+$. \s Set $L^I(M) := \bigoplus_{n\geqslant0} M/I^{n+1}M$. The $A$-module $L^I(M)$ can be given an $\mathcal{R}(I)$-module structure as follows. The Rees ring $\mathcal{R}(I)$ is a subring of $\hat{\mathcal{R}}(I)$, and $\hat{\mathcal{R}}(I)$ is a subring of $ S := A[t,t^{-1}] $. So $S$ is an $\mathcal{R}(I)$-module. Therefore $M[t,t^{-1}]=\bigoplus_{n\in \mathbb{Z}}Mt^n=M\otimes_A S$ is an $\mathcal{R}(I)$-module. The exact sequence \begin{equation} \label{1} 0 \longrightarrow \hat{\mathcal{R}}(I,M) \longrightarrow M[t,t^{-1}] \longrightarrow L^I(M)(-1) \longrightarrow 0 \end{equation} defines an $ \mathcal{R}(I) $-module structure on $L^I(M)(-1)$, and hence on $L^I(M)$.
The following result is well-known and easy to prove.
\begin{lemma}\label{op}
Let $ R=\bigoplus_{i\geqslant0}R_{i}$ be a graded ring. Let $E$ be a graded $R$-module {\rm (}not necessarily finitely generated{\rm )}. Then the following statements hold true:
\begin{enumerate}[{\rm (i)}]
\item If $E_{n} = 0 $ for all $n \gg 0$, and there is an injective graded homomorphism $E(-1) \hookrightarrow E $, then $E = 0$.
\item If $E_{n} = 0 $ for all $n \ll 0$, and there is an injective graded homomorphism $E \hookrightarrow E(-1) $, then $E = 0$.
\end{enumerate} \end{lemma} \s Let $ R = \bigoplus_{i \geqslant 0}R_{i} $ be a graded ring. Let $E$ be a graded $R$-module. Let $l$ be a positive integer. The $l$th Veronese subring of $R$ is defined by $R^{<l>} := \bigoplus_{n\geqslant0}R_{nl}$, and the $l$th Veronese submodule of $E$ is defined to be $E^{<l>} := \bigoplus_{n\in \mathbb{Z}}E_{nl}$. It can be observed that $E^{<l>}$ is a graded $R^{<l>}$-module.
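For instance (an elementary illustration of ours): if $ R = k[x] $ with $ \deg x = 1 $, then $ R^{<l>} = k[x^l] $. Note also that, directly from the definitions, $ \left( E(-1) \right)^{<l>}_n = E(-1)_{nl} = E_{nl-1} $ for every $ n \in \mathbb{Z} $; this small computation is used repeatedly in what follows.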
\begin{remark}\label{rmkk}{~}
\begin{enumerate}[{\rm (i)}]
\item $L^I(M)(-1)$ behaves well with respect to the Veronese functor. It can be easily checked that
$$L^I(M)(-1)^{<l>} = L^{I^l}(M)(-1).$$
\item \cite[Proposition~2.5]{Sh99} Veronese functor commutes with local cohomology: Let $ J $ be a homogeneous ideal of $ R $. Then, for every $ i \geqslant 0 $, we have
\[
\left( H_J^i(E) \right)^{<l>} \cong H_{J^{<l>}}^i(E^{<l>}) \mbox{ as graded $ R^{<l>} $-modules}.
\]
\end{enumerate} \end{remark}
Although $ L^I(M) $ is, in general, not finitely generated as an $ \mathcal{R}(I) $-module, it has the following vanishing property.
\begin{lemma}\label{oo}
Suppose $ \operatorname{grade}(I,M) = g > 0 $. Then, for every $0 \leqslant i \leqslant g-1$, $ H^i_{R_+}(L^I(M))_n = 0 $ for all $n \gg 0$. \end{lemma}
\begin{proof}
Since $\operatorname{grade}(I,M)=g > 0$, there exists an $M$-regular sequence $x_1,\ldots,x_g$ in $I$. It can be observed that $x_1t,\ldots,x_gt \in \mathcal{R}(I)_1$ is an $M[t,t^{-1}]$-regular sequence. So $H^i_{R_+}(M[t,t^{-1}])=0$ for $0\leqslant i \leqslant g - 1$. Therefore, in view of the short exact sequence \eqref{1} and using the corresponding long exact sequence in local cohomology, we get that
\begin{align}
H^i_{R_+}(L^I(M)(-1)) & \cong H^{i+1}_{R_+}(\hat{\mathcal{R}}(I,M)) ~ \mbox{ for $0\leqslant i\leqslant g-2$, and}\label{ik1}\\
H^{g-1}_{R_+}(L^I(M)(-1)) & \subseteq H^g_{R_+}(\hat{\mathcal{R}}(I,M)).\label{ik2}
\end{align}
Set $U := \bigoplus_{n<0} Mt^n$. Since $U$ is $R_+$-torsion, we have $H^0_{R_+}(U)=U$ and $H^i_{R_+}(U)=0$ for all $i \geqslant 1$. Considering the short exact sequence of $\mathcal{R}(I)$-modules
\begin{equation*}
0 \longrightarrow \mathcal{R}(I,M)\longrightarrow \hat{\mathcal{R}}(I,M) \longrightarrow \bigoplus_{n<0} Mt^n \longrightarrow 0,
\end{equation*}
the corresponding long exact sequence in local cohomology yields the exact sequence
\begin{align}
& 0\longrightarrow U\longrightarrow H^1_{R_+}(\mathcal{R}(I,M)) \longrightarrow H^1_{R+}(\hat{\mathcal{R}}(I,M)) \longrightarrow 0, \label{dg1}\\
& \mbox{and } H^i_{R_+}(\mathcal{R}(I,M)) \cong H^i_{R_+}(\hat{\mathcal{R}}(I,M)) \mbox{ for } i \geqslant 2. \label{dg2}
\end{align}
It is well-known that for each $i \geqslant 0$, $H^i_{R_+}(\mathcal{R}(I,M))_n=0$ for all $n\gg0$. Therefore, in view of \eqref{dg1} and \eqref{dg2}, for each $i \geqslant 0$, $H^i_{R_+}(\hat{\mathcal{R}}(I,M))_n = 0$ for all $n \gg 0$. Hence the lemma follows from \eqref{ik1} and \eqref{ik2}. \end{proof} \s The {\it Ratliff-Rush closure} of $ M $ with respect to $ I $ is defined to be \[ \widetilde{I M} := \bigcup_{m \geqslant 1} (I^{m+1}M :_M I^m). \] It is shown in \cite[Proposition~2.2.(iv)]{Rez3} that if $ \operatorname{grade}(I,M) > 0 $, then $\widetilde{I^{n}M}=I^nM$ for all $ n \gg 0 $. This motivates the following definition: \[ \rho^I(M) := \min\{n : \widetilde{I^{i}M}=I^iM ~\mbox{for all} ~i\geqslant n \}. \] We call $\rho^I(M)$ the {\it Ratliff-Rush number} of $M$ with respect to $I$.
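The Ratliff-Rush closure can be strictly larger. A well-known example (recorded here only for illustration): let $ A = k[[x,y]] $, $ M = A $ and $ I = (x^4, x^3y, xy^3, y^4) $. Then $ x^2y^2 \cdot I \subseteq I^2 $ (for instance, $ x^2y^2 \cdot x^4 = (x^3y)^2 $ and $ x^2y^2 \cdot y^4 = (xy^3)^2 $; the products with the remaining two generators are checked similarly), so $ x^2y^2 \in (I^2 :_A I) \subseteq \widetilde{I} $, whereas $ x^2y^2 \notin I $ because none of the monomial generators of $ I $ divides $ x^2y^2 $. Thus $ \widetilde{I} \neq I $, and in particular $ \rho^I(A) \geqslant 2 $.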
\s\label{mi} Let $I = (x_1,\ldots, x_m)$. Set $S := A[X_1,\ldots, X_m]$ with $\deg a = 0$ for all $a \in A$, and $\deg X_i=1$ for $i=1,\ldots ,m$. Then $S = \bigoplus_{n\geqslant 0}S_n$, where $S_n$ is the collection of all homogeneous polynomials of degree $n$. So $A = S_0$. We denote the ideal $\bigoplus_{n\geqslant 1}S_n$ of $S$ by $S_+$. We have a surjective homogeneous homomorphism of $A$-algebras, namely $\varphi: S\rightarrow \mathcal{R}(I)$, where $ \varphi(X_i) = x_i t $. We also have the natural map $\psi : \mathcal{R}(I) \to G_I(A)$. Note that \[ \varphi(S_+)=R_+, \quad \psi(R_+) = G_+ \quad \mbox{and} \quad \psi \circ \varphi(S_+)=G_+. \] By the graded independence theorem (\cite[13.1.6]{BS60}), it does not matter which of these rings we use to compute local cohomology. So from now on, we simply write $H^i(-)$ instead of $H^i_{R_+}(-)$ or $H^i_{G_+}(-)$. \s The natural exact sequences $ 0 \rightarrow {I^nM}/{I^{n+1}M} \rightarrow M/{I^{n+1}M} \rightarrow M/{I^{n}M} \rightarrow 0 $ induce the first fundamental exact sequence (as in \cite[(5)]{tp07}) of $\mathcal{R}(I)$-modules: \begin{equation}\label{1st} 0 \longrightarrow G_I(M) \longrightarrow L^I(M) \longrightarrow L^I(M)(-1) \longrightarrow 0. \end{equation}
\s Let $x$ be an $M$-superficial element with respect to $I$. Set $N=M/{xM}$. For every $ n \geqslant 1 $, we have an exact sequence of $A$-modules: \[ 0 \longrightarrow \dfrac{(I^{n+1}M :_M x)}{I^nM} \longrightarrow \dfrac{M}{I^nM} \stackrel{\psi_n}{\longrightarrow} \dfrac{M}{I^{n+1}M} \longrightarrow \dfrac{N}{I^{n+1}N} \longrightarrow 0, \] where $\psi_n(m+I^nM)=xm+I^{n+1}M$ for $ m \in M $. These sequences induce the second fundamental exact sequence (as in \cite[6.2]{tp07}) of $ \mathcal{R}(I) $-modules: \begin{equation}\label{2nd} 0\longrightarrow B^{I}(x,M) \longrightarrow L^I(M)(-1) \stackrel{\Psi_{xt}}{\longrightarrow} L^I(M) \stackrel{\rho}{\longrightarrow} L^{I}(N) \longrightarrow 0, \end{equation} where $\Psi_{xt}$ is multiplication by $ xt \in \mathcal{R}(I)_1 $, and \[ B^{I}(x,M) := \bigoplus_{n \geqslant 0}(I^{n+1}M:_{M}x)/{I^{n}M}. \]
\s\label{pri} It is shown in \cite[Proposition~4.7]{tp07} that if $\operatorname{grade}(I, M) > 0$, then \[ H^0_{R_+}(L^I(M)) \cong \bigoplus^{\rho^{I}(M)-1}_{i=0}~ \dfrac{\widetilde{I^{i+1}M}}{I^{i+1}M}. \] \s\label{mod-reg-tony} Let $ x \in I \smallsetminus I^2 $. If $ x^* $ is $G_I(M)$-regular, then $G_I(M)/x^* G_I(M) \cong G_I(M/xM)$ (the proof of \cite[Theorem 7]{hilbert} generalizes to this context).
We now show that $ \operatorname{grade}(G_+, G_I(M)) $ is always bounded above by $ \operatorname{grade}(I,M) $.
\begin{lemma}\label{hilbSyz}
We have that $ \operatorname{grade}(G_+, G_I(M)) \leqslant \operatorname{grade}(I,M) $. \end{lemma}
\begin{proof}
We prove the result by induction on $ g := \operatorname{grade}(I,M) $. Let us first consider the case $ g = 0 $. If possible, suppose $ \operatorname{grade}(G_+, G_I(M)) \geqslant 1 $. Then there is a $ G_I(M) $-regular element $ u = x+I^2 \in G_1 $ for some $ x \in I $. Since $ \operatorname{grade}(I,M) = 0 $, the element $x$ cannot be $M$-regular, i.e., there exists $ a \neq 0 $ in $ M $ such that $ xa = 0 $. By Krull's Intersection Theorem, there exists $ c \geqslant 0 $ such that $ a \in I^{c} M \smallsetminus I^{c+1}M$. Then $ a^* \neq 0 $ in $ I^{c} M / I^{c+1}M $, while $ u a^* = xa + I^{c+2}M = 0 $; since $ u $ is $ G_I(M) $-regular, this forces $ a^* = 0 $, which is a contradiction. Therefore $ \operatorname{grade}(G_+, G_I(M)) = 0 $.
We assume the result for $ g = l - 1 $, and prove it for $ g = l $ $ (\geqslant 1) $. If possible, suppose that the result fails for $ g = l $, i.e., $ \operatorname{grade}(I,M) = l $ and $ \operatorname{grade}(G_+, G_I(M)) \geqslant l+1 $. Then there exists a $G_I(M)$-regular sequence $ u_1, \ldots, u_{l+1} \in G_1 $, where $ u_i = x_i + I^2 $ for some $ x_i \in I $, $ 1 \leqslant i \leqslant l+1 $. By an argument similar to the one above, $ x_1 $ is $ M $-regular. We note that $\operatorname{grade}(I,M/{x_1M})=l-1$, while $u_2,\ldots,u_{l+1}$ is a regular sequence on $ {G_I(M)}/{x_1^* G_I(M)} \cong G_I(M/{x_1M})$; see \ref{mod-reg-tony}. This contradicts our induction hypothesis. \end{proof}
The result below gives a relationship between the first few local cohomologies of $ L^I(M) $ and that of $ G_I(M) $.
\begin{theorem}\label{jc}
Suppose $\operatorname{grade}(I,M) = g > 0$. Then, for $s \leqslant g-1$, we have $H^{i}(L^I(M)) =0$ for all $0 \leqslant i \leqslant s$ if and only if $H^{i}(G_I(M)) =0$ for all $0 \leqslant i \leqslant s$. \end{theorem}
\begin{proof}
In view of the short exact sequence \eqref{1st} and the corresponding long exact sequence in local cohomology, it follows that if $H^i(L^I(M)) =0$ for $i=0,\ldots,s$, then $H^{i}(G_I(M)) =0$ for $i=0,\ldots,s$. We now prove the converse part by using induction on $s$. For $ s = 0 $, let us assume that $H^{0}(G_I(M)) =0$. Then \eqref{1st} yields an injective graded homomorphism $ H^0(L^I(M)) \hookrightarrow H^0(L^I(M))(-1) $. Hence, in view of Lemma~\ref{op}(ii), we obtain that $ H^0(L^I(M)) = 0 $.
We now assume the result for $ s = l - 1 $, and prove it for $ s = l $, where $ l \geqslant 1 $. Let $H^{i}(G_I(M))=0$ for $0 \leqslant i \leqslant l$. So $ \operatorname{grade}(G_+,G_I(M)) \geqslant l + 1 $. Then there is $ x \in I \smallsetminus I^2 $ such that $x^*$ is $G_I(M)$-regular. Hence it can be easily shown that $ (I^{n+1}M :_M x) = I^n M $ for all $ n \geqslant 0 $. In particular, we have $ B^I(x,M) = 0 $ and $ x $ is $M$-superficial. Set $ N := M/{xM} $. Note that ${G_I(M)}/{x^*G_I(M)} \cong G_I(N)$ (see \ref{mod-reg-tony}). So $ \operatorname{grade}(G_+,G_I(N)) \geqslant l $, and hence $H^{i}(G_I(N))=0$ for $0\leqslant i\leqslant l-1$. Therefore, by the induction hypothesis, we have $H^{j}(L^I(N))=0$ for $0\leqslant j\leqslant l-1$. Since $B^I(x,M)=0$, the short exact sequence \eqref{2nd} and the corresponding long exact sequence in local cohomology provide the exact sequences:
\begin{equation}\label{uvw}
0 \longrightarrow H^{i}(L^I(M))(-1) \longrightarrow H^{i}(L^I(M)) \quad \mbox{for } 0 \leqslant i \leqslant l.
\end{equation}
In view of Lemma~\ref{hilbSyz}, $ \operatorname{grade}(I,M) \geqslant \operatorname{grade}(G_+, G_I(M)) \geqslant l + 1 $. Hence, by Lemma~\ref{oo}, for every $ 0 \leqslant i \leqslant l $, $ H^i(L^I(M))_n = 0 $ for all $ n \gg 0 $. Therefore it follows from \eqref{uvw} and Lemma~\ref{op}(i) that $H^{i}(L^I(M))=0$ for all $0 \leqslant i \leqslant l$. \end{proof}
As a consequence of Theorem~\ref{jc}, we obtain the following characterization of $ \operatorname{grade}(G_+, G_I(M)) $ in terms of local cohomology of $ L^I(M) $.
\begin{corollary}\label{count-sup}
Suppose $ \operatorname{grade}(I,M) = g > 0 $. Then
\[
\operatorname{grade}(G_+, G_I(M)) = \min\{ i : H^i(L^I(M)) \neq 0, \mbox{ where }0 \leqslant i \leqslant g \}.
\] \end{corollary}
\begin{proof}
It is well-known that
\begin{equation*}
\operatorname{grade}(G_+, G_I(M)) = \min\{ i : H_{G_+}^{i}(G_I(M)) \neq 0 \}.
\end{equation*}
By Lemma~\ref{hilbSyz}, we have $ \operatorname{grade}(G_+, G_I(M)) \leqslant g $. Set
\begin{equation*}
\alpha := \min\{ i : H^i(L^I(M)) \neq 0, \mbox{ where }0 \leqslant i \leqslant g \}.
\end{equation*}
By considering \eqref{1st}, it can be easily observed that $ H^i(L^I(M)) \neq 0 $ for some $ i $ with $ 0 \leqslant i \leqslant \operatorname{grade}(G_+, G_I(M)) $ $ (\leqslant g) $. So $ \alpha \leqslant \operatorname{grade}(G_+, G_I(M)) $. Hence, by virtue of Theorem~\ref{jc}, it follows that $ \alpha = \operatorname{grade}(G_+, G_I(M)) $. \end{proof}
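The inequality in Lemma~\ref{hilbSyz} can be strict. For illustration (the example is ours): let $ A = k[[x,y]] $ and $ I = (x^4, x^3y, xy^3, y^4) $, so that $ \operatorname{grade}(I,A) = 2 $, as $ I $ is $ \mathfrak{m} $-primary. We observed after the definition of the Ratliff-Rush closure that $ \widetilde{I} \neq I $; hence $ H^0(L^I(A)) \neq 0 $ by \ref{pri}, and Corollary~\ref{count-sup} yields $ \operatorname{grade}(G_+, G_I(A)) = 0 $.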
\section{Asymptotic grade for associated graded modules}\label{sec3}
In the present section, we explore the asymptotic behaviour of the associated graded modules for powers of an ideal. In particular, we study their grade with respect to the irrelevant ideals of the corresponding associated graded rings.
Throughout this section, we work with the following hypothesis, although we do not need the Cohen-Macaulay assumptions everywhere.
\begin{hypothesis}\label{hyp-sec-3}
Let $(A,\mathfrak{m})$ be a Cohen-Macaulay local ring with infinite residue field, and $ M $ be a Cohen-Macaulay $ A $-module. Let $ I $ be an ideal of $ A $ such that $ \operatorname{grade}(I,M) = g > 0 $. \end{hypothesis}
\s[\bf A few invariants]\label{invariants} In our study, we use the following invariants. \begin{enumerate}[{\rm (i)}]
\item
$ \xi_I(M) := \min\big( \{ g \} \cup \{\, i : 0 \leqslant i \leqslant g-1, \mbox{ and either } H^i(L^I(M))_{-1} \neq 0 \mbox{ or } H^i(L^I(M))_j \neq 0 \mbox{ for infinitely many } j < 0 \,\} \big)$.
Note that $ 1 \leqslant \xi_I(M) \leqslant g $. (Indeed, $ i = 0 $ never occurs in the set above: $ H^0(L^I(M)) $ is a submodule of $ L^I(M) $, which is concentrated in non-negative degrees.)
\item
The {\it amplitude} of $ M $ with respect to $ I $ is defined to be
\[
\operatorname{amp}_{I}(M) := \max \{ |n| : H^i(L^I(M))_{n-1} \neq 0 \mbox{ for some } 0 \leqslant i \leqslant \xi_I(M)-1 \}.
\]
It follows from (i) and Lemma~\ref{oo} that $ \operatorname{amp}_{I}(M) < \infty $.
\item
Let $N$ be a graded module {\rm (}not necessarily finitely generated{\rm )}. Define
\[
\operatorname{end}(N) := \sup\{ n \in \mathbb{Z} : N_n \neq 0 \}.
\]
\item
By Lemma~\ref{oo}, for every $ 0 \leqslant i \leqslant g - 1 $, $ H^i_{R_+}(L^I(M))_n = 0 $ for all $ n \gg 0 $. So we set
\[
b^I_i(M) := \operatorname{end}\left(H^i_{R_+}(L^I(M))\right) \mbox{ for every } 0 \leqslant i \leqslant g - 1.
\] \end{enumerate}
We start by showing a special property of the first local cohomology of $ L^I(M) $.
\begin{lemma}\label{crucial}
For a fixed integer $ c < 0 $, the following conditions are equivalent:
\begin{enumerate}[\rm (i)]
\item
$ H^1(L^I(M))_c = 0 $.
\item
$ H^1(L^I(M))_j = 0 $ for all $ j \leqslant c $.
\end{enumerate} \end{lemma}
\begin{proof}
We only need to prove (i) $ \Rightarrow $ (ii).
Suppose $ H^1(L^I(M))_c = 0 $. Let $ x $ be an $ M $-superficial element with respect to $ I $. Then $ (I^{n+1}M :_M x) = I^nM $ for every $ n \gg 0 $, i.e., $ B^I(x,M) $ is $ R_+ $-torsion. Therefore $ H^0(B^I(x,M)) = B^I(x,M) $, and $ H^i(B^I(x,M)) = 0 $ for all $ i \geqslant 1 $. Hence, by splitting \eqref{2nd} into two short exact sequences, and considering the corresponding long exact sequences, one obtains the following exact sequence:
\begin{align}
0 \rightarrow B^I(x,M) \longrightarrow & H^0(L^I(M))(-1) \longrightarrow H^0(L^I(M)) \longrightarrow H^0(L^I(N)) \label{les}\\
\longrightarrow & H^1(L^I(M))(-1) \longrightarrow H^1(L^I(M)) \longrightarrow H^1(L^I(N)),\nonumber
\end{align}
where $ N = M/xM $. Therefore, for every $ n < 0 $, since $ H^0(L^I(N))_n = 0 $, we have the following exact sequence:
\[
0 \longrightarrow H^1(L^I(M))_{n-1} \longrightarrow H^1(L^I(M))_n.
\]
Hence, since $ H^1(L^I(M))_c = 0 $, it follows that $ H^1(L^I(M))_j = 0 $ for all $ j \leqslant c $. \end{proof}
In \cite[Theorem~3.4]{Sh99}, Huckaba and Marley proved that if $ A $ is Cohen-Macaulay with $ \dim(A) \geqslant 2 $, and $I$ is a normal ideal with $ \operatorname{grade}(I) \geqslant 1 $, then $ \operatorname{depth}(G_{I^n}(A)) \geqslant 2 $ for all $n\gg0$. We now prove a similar result for $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) $.
\begin{theorem}\label{RR}
Let $ I $ be a normal ideal of $A$ with $ \operatorname{grade}(I) \geqslant 2 $. Also assume that $A$ is excellent.
Then
\[
\operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) \geqslant 2 \quad \mbox{for all } \; n \gg 0.
\] \end{theorem}
\begin{proof}
Set $ u := \max\{ b^I_0(A), b^I_1(A) \} + 2 $. Let $ l \geqslant u $. We write $ H_{R_+}^i(L^I(A)) = \bigoplus_{n \in \mathbb{Z}} V^i_n $, as it is a graded $\mathcal{R}(I)$-module. Observe that $ V_{nl-1}^i = 0 $ for all $ n \geqslant 1 $ and $ i = 0, 1 $, since $ nl - 1 \geqslant l - 1 \geqslant u - 1 > b^I_i(A) $; see \ref{invariants}. We note that
\begin{align}
H_{\mathcal{R}(I^l)_+}^i\left( L^{I^l}(A) \right) (-1) & \cong H_{\mathcal{R}(I^l)_+}^i\left( L^{I^l}(A)(-1) \right) \nonumber\\
& \cong H_{(R_+)^{<l>}}^i \left( \left( L^I(A)(-1) \right)^{<l>} \right) \mbox{ [by Remark~\ref{rmkk}.(i)]} \label{rmk2.4}\\
& \cong \left( H_{R_+}^i \left( L^I(A)(-1) \right) \right)^{<l>} \mbox{ [by Remark~\ref{rmkk}.(ii)]} \nonumber\\
& \cong \bigoplus_{n \in \mathbb{Z}} V^i_{nl - 1}.\nonumber
\end{align}
Therefore, for every $ i \in \{ 0, 1 \} $, since $ V_{nl-1}^i = 0 $ for all $ n \geqslant 1 $, we have
\begin{equation}\label{h0}
H_{\mathcal{R}(I^l)_+}^i ( L^{I^l}(A) )_n = 0, \mbox{ i.e., }H_{\mathcal{R}(K)_+}^i \big( L^{K}(A) \big)_n = 0 \mbox{ for all } n \geqslant 0,
\end{equation}
where $ K := I^l $. In particular, it follows that $ H_{\mathcal{R}(K)_+}^0(L^K(A)) = 0 $. We now show that $ H_{\mathcal{R}(K)_+}^1(L^K(A)) = 0 $. Note that $K$ is integrally closed. Therefore, by virtue of \cite[Theorem~2.1]{HU14}, after a flat extension, there exists a superficial element $ x \in K $ such that the ideal $ J := KB $ is integrally closed in $ B := A/(x) $. In view of a sequence like \eqref{les}, by applying \eqref{h0}, we obtain that $ H_{\mathcal{R}(K)_+}^0(L^K(B))_n = 0 $ for all $ n \geqslant 1 $. Hence, by \ref{pri}, we have $ H_{\mathcal{R}(K)_+}^0(L^K(B)) \cong \widetilde{J}/J = 0 $, as $ J $ is integrally closed (the Ratliff-Rush closure is contained in the integral closure); see \cite[2.3.3]{RR78}. Therefore, for every $ n $, a sequence like \eqref{les} yields the following exact sequence:
\begin{equation}\label{0hh}
0 \longrightarrow H_{\mathcal{R}(K)_+}^1(L^K(A))_{n-1} \longrightarrow H_{\mathcal{R}(K)_+}^1(L^K(A))_{n}.
\end{equation}
Since $ H_{\mathcal{R}(K)_+}^1(L^K(A))_n = 0 $ for all $ n \geqslant 0 $, it can be proved by repeatedly applying \eqref{0hh} that $ H_{\mathcal{R}(K)_+}^1(L^K(A))_n = 0 $ for all $ n $, and hence $ H_{\mathcal{R}(K)_+}^1(L^K(A)) = 0 $. Thus, by virtue of Corollary~\ref{count-sup}, we have that $ \operatorname{grade}(G_{I^l}(A)_+,G_{I^l}(A)) \geqslant 2 $, and this holds true for every $ l \geqslant u $, which completes the proof of the theorem. \end{proof}
\begin{remark}
We have used \cite[Theorem~2.1]{HU14} crucially in the proof above. This is the only place where we need that the ring is excellent. \end{remark}
Since $ \operatorname{grade}(I^n) = \operatorname{grade}(I) $ for every $ n \geqslant 1 $, as an immediate consequence of Theorem~\ref{RR} and Lemma~\ref{hilbSyz}, one obtains the following result.
\begin{corollary}\label{cor-grade-2}
Let $ I $ be a normal ideal of $A$ with $ \operatorname{grade}(I) = 2 $. Also assume that $A$ is excellent. Then
\[
\operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) = 2 \mbox{ for all } n \gg 0.
\] \end{corollary}
The following theorem gives an asymptotic lower bound on the grade of the associated graded modules of powers of an ideal.
\begin{theorem}\label{1p}
For each $ l > \operatorname{amp}_{I}(M) $, $ \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant \xi_{I}(M) $. \end{theorem}
\begin{proof}
Set $ E^i := H^{i}(L^{I}(M)(-1)) $, and $ u := \xi_{I}(M) $. Fix an arbitrary $ l > \operatorname{amp}_{I}(M) $. Also fix $ i $ with $ 0 \leqslant i \leqslant u - 1 $. Then, for $ n \neq 0 $, we have $ E^{i}_{nl} = H^i(L^I(M))_{nl-1} = 0 $ as $ |n| l \geqslant l > \operatorname{amp}_{I}(M) $. Also $ E^{i}_{0} = H^i(L^I(M))_{-1} = 0 $ since $ 0 \leqslant i \leqslant \xi_{I}(M) - 1 $. Hence $(E^{i})^{<l>} = \bigoplus_{n\in \mathbb{Z}} E^{i}_{nl}=0$. So, by Remark~\ref{rmkk} (as in \eqref{rmk2.4}), it follows that
\begin{equation}\label{vero-loc}
H_{\mathcal{R}(I^l)_+}^i \left( L^{I^l}(M)(-1) \right) = \left( H_{R_+}^{i} \left(L^{I}(M)(-1) \right) \right)^{<l>} = 0.
\end{equation}
Therefore $ H_{\mathcal{R}(I^l)_+}^i \left( L^{I^l}(M) \right) = 0 $ for all $ 0 \leqslant i \leqslant u - 1 $. Hence, by virtue of Corollary~\ref{count-sup}, $\operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant u = \xi_{I}(M)$ for all $ l > \operatorname{amp}_{I}(M) $. \end{proof}
The following corollary shows how the vanishing of a single graded component of a certain local cohomology module plays a crucial role in the study of the grade of asymptotic associated graded modules.
\begin{corollary}\label{id}
The following conditions are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $ H^1(L^I(M))_{-1} = 0 $.
\item $\operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant 2 $ for all $ l > \operatorname{amp}_{I}(M) $.
\item $\operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant 2 $ for some $ l \geqslant 1 $.
\end{enumerate} \end{corollary}
\begin{proof}
(i) $ \Rightarrow $ (ii): Let $ H^1(L^I(M))_{-1} = 0 $. So, by Lemma~\ref{crucial}, $ H^1(L^I(M))_j = 0 $ for all $j\leqslant-1$. Therefore, since $ \xi_{I}(M) \geqslant 1$ (always), it follows that $ \xi_{I}(M) \geqslant 2$. Hence, in view of Theorem~\ref{1p}, $\operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M))\geqslant2$ for all $ l > \operatorname{amp}_{I}(M) $.
(ii) $ \Rightarrow $ (iii): It holds trivially.
(iii) $ \Rightarrow $ (i): Suppose $ \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant 2 $ for some $ l \geqslant 1 $. Then it follows from Corollary~\ref{count-sup} that $H_{\mathcal{R}(I^l)_+}^1 \big( L^{I^l}(M) \big) = 0 $. Therefore, as in \eqref{vero-loc}, we obtain that $ \left( H_{R_+}^{1} \left( L^{I}(M)(-1) \right) \right)^{<l>} = 0 $, and hence its $ 0 $th component gives $ H^1(L^I(M))_{-1} = 0 $. \end{proof}
As a consequence, we obtain the following asymptotic behaviour of associated graded modules for powers of an ideal.
\begin{corollary}\label{thm-xi-1}
Exactly one of the following alternatives must hold true:
\begin{enumerate}[{\rm (i)}]
\item $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) = 1 $ for all $ n > \operatorname{amp}_{I}(M) $.
\item $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 $ for all $ n > \operatorname{amp}_{I}(M) $.
\end{enumerate} \end{corollary}
\begin{proof}
Since $ \xi_{I}(M) \geqslant 1 $, by virtue of Theorem~\ref{1p}, $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 1$ for all $ n > \operatorname{amp}_{I}(M) $. Hence the result follows from Corollary~\ref{id}. \end{proof}
Here we prove our main result of this section.
\begin{theorem}\label{thm-xi-2}
With Hypothesis~{\rm \ref{hyp-sec-3}}, suppose $ \operatorname{height}(I) \geqslant \dim(A) - 2 $. Then
\[
\operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) = \xi_{I}(M) \mbox{ for every } l > \operatorname{amp}_{I}(M).
\] \end{theorem}
\begin{proof}
Set $ u := \xi_{I}(M) $. By virtue of Theorem~\ref{1p}, $ \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \geqslant u $ for every $ l > \operatorname{amp}_{I}(M) $. If possible, suppose that $ \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) > u $ for some $ l > \operatorname{amp}_{I}(M) $. Then, in view of Corollary~\ref{count-sup}, $ H_{\mathcal{R}(I^l)_+}^u \big( L^{I^l}(M) \big) = 0 $. Thus, as in \eqref{vero-loc}, we obtain that $ \left( H_{R_+}^u \left( L^{I}(M)(-1) \right) \right)^{<l>} = 0 $, and hence
\begin{equation}\label{hu0}
H^{u}(L^{I}(M))_{nl-1} = 0 \mbox{ for all } n \in \mathbb{Z}.
\end{equation}
We note that $ u < \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) \leqslant g $; see Lemma~\ref{hilbSyz}.
The long exact sequence corresponding to \eqref{1st} provides an exact sequence:
\begin{equation}\label{uuu}
H^{u-1}\left( L^I(M) \right)(-1) \to H^u(G_I(M)) \to H^u(L^I(M)) \to H^u\left( L^I(M) \right)(-1) .
\end{equation}
Since $ u = \xi_{I}(M) $, it follows from the definition of $ \xi_{I}(M) $ that $ H^{u-1}(L^I(M))_n = 0 $ for all $ n \ll 0 $. Therefore \eqref{hu0} and \eqref{uuu} yield that $ H^u(G_I(M))_{nl-1} = 0 $ for all $ n \ll 0 $. Hence, since $ H^u(G_I(M)) $ is tame (due to \cite[Lemma~4.3]{Bb}), there exists some $ c < 0 $ such that $ H^u(G_I(M))_j = 0 $ for all $ j \leqslant c $. (Note that $ \dim(A/I) \leqslant \dim(A) - \operatorname{height}(I) \leqslant 2 $, and $ G_I(M) $ is a finitely generated graded $ G_I(A) $-module).
Since $ H^u(G_I(M))_j = 0 $ for all $ j \leqslant c $, \eqref{uuu} produces an exact sequence
\begin{equation}
0 \longrightarrow H^u(L^I(M))_j \longrightarrow H^u(L^I(M))_{j-1} \mbox{ for every } j \leqslant c.
\end{equation}
Therefore, if $ m, n \leqslant c $ are integers such that $ m \leqslant n $, then $ H^u(L^I(M))_n $ can be considered as a submodule of $ H^u(L^I(M))_m $. Using this fact and \eqref{hu0}, one can prove that $ H^u(L^I(M))_j = 0 $ for all $ j \leqslant n'l $, where $ n' $ is a fixed integer such that $ n'l \leqslant c $. Thus we have $ H^u(L^I(M))_j = 0 $ for all $ j \ll 0 $, and $ H^u(L^I(M))_{-1} = 0 $ by \eqref{hu0}. Since $ u < g $, this contradicts the definition of $ \xi_{I}(M) $. Therefore $ \operatorname{grade}(G_{I^l}(A)_+, G_{I^l}(M)) = \xi_{I}(M) $ for every $ l > \operatorname{amp}_{I}(M) $. \end{proof}
\section{On the sets $ \operatorname{Ass}^{\infty}_I(M) $ and $ T^\infty_1(I, M) $}\label{sec4}
Let $ (A,\mathfrak{m}) $ be a local ring. Let $ I $ be an ideal of $ A $, and $ M $ be a finitely generated $ A $-module. By a result of Brodmann \cite{Mb0}, there exists $ n_0 $ such that $ \operatorname{Ass}_A(M/I^nM) = \operatorname{Ass}_A(M/I^{n_0}M)$ for all $ n \geqslant n_0 $. The eventual constant set (i.e., $ \operatorname{Ass}_A(M/I^{n_0}M) $) is denoted by $ \operatorname{Ass}^{\infty}_I(M) $. In \cite[Theorem~1]{MS}, Melkersson and Schenzel generalized Brodmann's result by proving that for every fixed $ i \geqslant 0 $, the set $ \operatorname{Ass}_A \left( \operatorname{Tor}^A_i(M, A/I^n) \right) $ is constant for all $ n \gg 0 $. We denote this stable value by $ T^\infty_i(I, M) $. Note that $ \operatorname{Ass}^{\infty}_I(M) $ is nothing but $ T^\infty_0(I, M) $. In this section, we mainly study the question of when $ \mathfrak{m} \in \operatorname{Ass}^{\infty}_I(M) $ (resp. $ \mathfrak{m} \in T^\infty_1(I, M) $). Our first result in this direction concerns the set $ \operatorname{Ass}^{\infty}_I(M) $.
\begin{theorem}\label{sir1}
Let $(A,\mathfrak{m})$ be a non-regular Cohen-Macaulay local ring, and $L$ be an MCM $A$-module. Suppose $ M = \operatorname{Syz}^A_1(L) ~(\neq 0) $, and $ M_P $ is free for every $ P \in \operatorname{Spec}(A) \smallsetminus \{ \mathfrak{m} \}$. Let $ I $ be an ideal of $ A $ such that $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $.
\[
\text{If} \ \mathfrak{m} \notin \operatorname{Ass}^{\infty}_I(M), \text{ then } \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 \text{ for every } n > \operatorname{amp}_I(M).
\] \end{theorem}
\begin{proof}
Since $ L $ is an MCM $ A $-module, every $ A $-regular element is $ L $-regular. By virtue of Lemma~\ref{hilbSyz}, from the given hypotheses, it follows that $ \operatorname{grade}(I,A) > 0 $, and hence $ \operatorname{grade}(I,L) > 0 $. So, by Corollary~\ref{thm-xi-1}, $ \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(L)) \geqslant 1 $ for all $ n \gg 0 $. Therefore, in view of Corollary~\ref{count-sup}, we obtain that
\begin{equation}\label{0L0}
H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(L) \right) = 0 \quad \mbox{for all } n \gg 0.
\end{equation}
Note that $ M $ is an MCM $ A $-module. So as above $ \operatorname{grade}(I, M) > 0 $.
We have a short exact sequence
\begin{equation}\label{io}
0 \longrightarrow M \longrightarrow F \longrightarrow L \longrightarrow 0,
\end{equation}
where $ F $ is a free $A$-module. For every $ n $, by applying $ (A/{I^n}) \otimes_A - $ on \eqref{io}, we obtain an exact sequence:
\begin{equation}\label{io1}
0 \longrightarrow \operatorname{Tor}^A_1(A/{I^n}, L) \longrightarrow {M}/{I^nM} \longrightarrow {F}/{I^nF} \longrightarrow {L}/{I^nL} \longrightarrow 0.
\end{equation}
For every $ P \in \operatorname{Spec}(A) \smallsetminus \{ \mathfrak{m} \}$, since $ M_P $ is free, we get that $ L_P $ is free (being an MCM $ A_P $-module of finite projective dimension), and hence $ \operatorname{Tor}^A_1(A/{I^n}, L)_P = 0$. So $ \operatorname{Ass}_A \left( \operatorname{Tor}^A_1(A/{I^n}, L) \right) \subseteq \{ \mathfrak{m} \} $ for every $ n $. Therefore, since $ \mathfrak{m} \notin \operatorname{Ass}^{\infty}_I (M) $, in view of \eqref{io1}, it can be deduced that $ \operatorname{Ass}_A \left( \operatorname{Tor}^A_1(A/{I^n}, L) \right) = \emptyset $ for all $ n \gg 0 $, and hence there is $ c' \geqslant 1 $ such that $ \operatorname{Tor}^A_1(A/{I^n}, L) = 0 $ for every $ n \geqslant c' $. Thus \eqref{io1} yields an exact sequence:
\begin{equation}\label{io2}
0 \longrightarrow {M}/{I^nM} \longrightarrow {F}/{I^nF} \longrightarrow {L}/{I^nL} \longrightarrow 0
\end{equation}
for every $ n \geqslant c' $. In particular, for every $ n \geqslant c' $, we have short exact sequences:
\begin{equation*}
0 \longrightarrow {M}/{I^{nk}M} \longrightarrow {F}/{I^{nk}F} \longrightarrow {L}/{I^{nk}L} \longrightarrow 0
\end{equation*}
for all $ k \geqslant 1 $, which induce an exact sequence of $ \mathcal{R}(I) $-modules:
\begin{equation}\label{io3}
0\longrightarrow L^{I^n}(M)(-1) \longrightarrow L^{I^n}(F)(-1) \longrightarrow L^{I^n}(L)(-1) \longrightarrow 0.
\end{equation}
The corresponding long exact sequence of local cohomology modules yields
\begin{align}\label{io4}
0 \longrightarrow & H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(M) \right) \longrightarrow H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(F) \right) \longrightarrow H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(L) \right)\\
\longrightarrow & H_{\mathcal{R}(I^n)_+}^1 \left( L^{I^n}(M) \right) \longrightarrow H_{\mathcal{R}(I^n)_+}^1 \left( L^{I^n}(F) \right). \nonumber
\end{align}
Since $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $, by virtue of Corollary~\ref{count-sup}, we get that $ H_{\mathcal{R}(I^n)_+}^i \left( L^{I^n}(A) \right) = 0 $ for $ i = 0, 1 $, and for all $ n \gg 0 $. Therefore
\begin{equation}\label{io5}
H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(F) \right) = 0 = H_{\mathcal{R}(I^n)_+}^1 \left( L^{I^n}(F) \right) \quad \mbox{for all } n \gg 0.
\end{equation}
It follows from \eqref{0L0}, \eqref{io4} and \eqref{io5} that
\begin{equation*}
H_{\mathcal{R}(I^n)_+}^0 \left( L^{I^n}(M) \right) = 0 = H_{\mathcal{R}(I^n)_+}^1 \left( L^{I^n}(M) \right) \quad \mbox{for all } n \gg 0.
\end{equation*}
Hence, in view of Corollaries~\ref{count-sup} and \ref{id}, $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 $ for every $ n > \operatorname{amp}_I(M) $, which completes the proof of the theorem. \end{proof}
We now give
\begin{proof}[Proof of Theorem~\ref{first}]
This follows from Theorem~\ref{sir1} and Theorem~\ref{RR}. \end{proof}
The following result gives a variation of Theorem~\ref{sir1}.
\begin{theorem}\label{var-sir1}
Let $ (A,\mathfrak{m}) $ be a Cohen-Macaulay local ring of dimension $ d \geqslant 3 $. Set $ M := \operatorname{Syz}^A_1(L)$ for some MCM $A$-module $ L $. Let $ I $ be a locally complete intersection ideal of $A$ with $ \operatorname{height}(I) = d - 1 $, the analytic spread $ l(I) = d $, and $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $. We have that
\[
\text{if} \ \mathfrak{m} \notin \operatorname{Ass}^{\infty}_I(M), \text{ then } \operatorname{grade}( G_{I^n}(A)_+, G_{I^n}(M)) \geqslant 2 \text{ for every } n > \operatorname{amp}_I(M).
\] \end{theorem}
\begin{remark}
See \cite[2.2]{HHF} for cases when the hypotheses on the ideal $ I $ are satisfied. \end{remark}
\begin{proof}[Proof of Theorem~\ref{var-sir1}]
We claim that $ \operatorname{Tor}^A_1(A/{I^n}, L) $ has finite length for all $ n \gg 0 $. To show this, consider $ P \in \operatorname{Spec}(A) \smallsetminus \{ \mathfrak{m} \} $. If $ L_P $ is free, then $ \operatorname{Tor}^A_1(A/{I^n}, L)_P = 0 $. So we may assume that $ L_P $ is not free. If $ I \nsubseteq P $, then also $ \operatorname{Tor}^A_1(A/{I^n}, L)_P = 0 $ for every $ n \geqslant 1 $. So we assume that $ I \subseteq P $. Since $ \operatorname{height}(I) = d - 1 $ and $ P \neq \mathfrak{m} $, we have $ \operatorname{height}(P) = d - 1 $, and hence $ P $ is minimal over $ I $. (So there are only finitely many such prime ideals.) Note that $ I_P $ is a $ P A_P $-primary ideal of $ A_P $. Since $ I $ is a locally complete intersection ideal, $ I_P $ is generated by an $ A_P $-regular sequence of length $ d - 1 $. Therefore, in view of \cite[Remark~20]{hilbert}, we obtain that $ \operatorname{Tor}^A_1(A/{I^n}, L)_P = 0 $ for all $ n \gg 0 $. Hence $ \operatorname{Tor}^A_1(A/{I^n}, L) $ has finite length for all $ n \gg 0 $. Now, by the same arguments as in the proof of Theorem~\ref{sir1}, it follows that $\operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(M))\geqslant2$ for every $ n > \operatorname{amp}_I(M) $. \end{proof} \s ({\it MCM approximations and an invariant of modules}). Let $(A,\mathfrak{m})$ be a Gorenstein local ring. Consider a finitely generated $ A $-module $ M $. By virtue of \cite[Theorem~A]{AB89}, there is an MCM approximation of $ M $, i.e., a short exact sequence $ s : 0 \rightarrow Y \rightarrow X \rightarrow M \rightarrow 0 $ of $ A $-modules, where $ X $ is MCM and $ Y $ has finite injective dimension (equivalently, $ Y $ has finite projective dimension, since $ A $ is Gorenstein). We say that $ X $ is an MCM approximation of $ M $. In view of \cite[Theorem~B]{AB89}, if $ s^\prime \colon 0 \rightarrow Y^\prime \rightarrow X^\prime \rightarrow M \rightarrow 0$ is another MCM approximation of $ M $, then $ X $ and $ X^\prime $ are stably isomorphic, i.e., there exist finitely generated free $ A $-modules $ F $ and $ G $ such that $ X \oplus F \cong X^\prime \oplus G $, and hence $ \operatorname{Syz}^A_1(X) \cong \operatorname{Syz}^A_1(X') $. Thus $ \operatorname{Syz}^A_1(X) $ is an invariant of $ M $. Note that $ \operatorname{Syz}^A_1(X) = 0 $ if and only if $ \operatorname{projdim}_A(M) $ is finite.
We use the following lemma to prove our result on $ T^\infty_1(I, M) $.
\begin{lemma}\label{oi}
Let $ (A,\mathfrak{m}) $ be a Gorenstein local ring of dimension $ d $. Suppose $ A $ has an isolated singularity. Let $ I $ be a normal ideal of $ A $ such that $ l(I) < d $. Let $ M $ be a Cohen-Macaulay $ A $-module of dimension $ d - 1 $ with $ \operatorname{projdim}_A(M) = \infty $. Let $ X_M $ be an MCM approximation of $ M $. Then the following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{m} \notin T^\infty_1(I, M) $ {\rm(}i.e., $ \mathfrak{m} \notin \operatorname{Ass}_A(\operatorname{Tor}^A_1(M,A/{I^n})) $ for all $ n \gg 0 ${\rm )}.
\item $ \operatorname{Tor}^A_1(X_M, A/{I^n}) = 0 $ for all $ n \gg 0 $.
\end{enumerate} \end{lemma}
\begin{proof}
Since $ X_M $ is an MCM approximation of $ M $, there is an exact sequence $ 0 \to Y \to X_M \to M \to 0 $ with $ \operatorname{projdim}_A(Y) $ finite. As $ \operatorname{depth}(M) \geqslant d - 1 $, the depth lemma yields $ \operatorname{depth}(Y) \geqslant d $; thus $ Y $ is MCM of finite projective dimension, and hence free by the Auslander-Buchsbaum formula. So we get a short exact sequence $ 0 \to F \to X_M \to M \to 0 $, where $ F $ is a free $ A $-module. The corresponding long exact sequences of Tor-modules yield an exact sequence
\begin{align}\label{3b}
0 \longrightarrow \operatorname{Tor}^A_1(X_M,A/{I^n}) \longrightarrow \operatorname{Tor}^A_1(M,A/{I^n}) & \longrightarrow \\
F/{I^nF} \longrightarrow {X_M}/{I^nX_M} \longrightarrow M/{I^nM} & \longrightarrow 0 \quad \mbox{for every } n \geqslant 1.\nonumber
\end{align}
(i) $ \Rightarrow $ (ii): Since $ A $ has an isolated singularity, it follows that $ (X_M)_P $ is a free $ A_P $-module for every $ P \in \operatorname{Spec}(A) \smallsetminus \{ \mathfrak{m} \}$. So $ \operatorname{Tor}^A_1(X_M, A/{I^n}) $ has finite length, and hence $ \operatorname{Ass}_A\left( \operatorname{Tor}^A_1(X_M, A/{I^n}) \right) \subseteq \{ \mathfrak{m} \} $ for every $ n \geqslant 1 $. Therefore, since $ \mathfrak{m} \notin T^\infty_1(I, M) $, in view of \eqref{3b}, we obtain that $ \operatorname{Ass}_A\left( \operatorname{Tor}^A_1(X_M, A/{I^n}) \right) = \emptyset $ for all $ n \gg 0 $, which implies that $ \operatorname{Tor}^A_1(X_M, A/{I^n}) = 0 $ for all $ n \gg 0 $.
(ii) $ \Rightarrow $ (i): Since $ I $ is normal, and $ l(I) < d $, by virtue of \cite[Proposition~4.1]{Mcada06}, we have $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(A) $. In view of \eqref{3b}, since $ \operatorname{Tor}^A_1(X_M, A/{I^n}) = 0 $ for all $ n \gg 0 $, it follows that $ T^\infty_1(I, M) \subseteq \operatorname{Ass}^\infty_I(A) $, and hence $ \mathfrak{m} \notin T^\infty_1(I, M) $. \end{proof}
The following theorem provides a necessary and sufficient condition for `$ \mathfrak{m} \in T^\infty_1(I, M) $' for a certain class of ideals and modules over a Gorenstein local ring.
\begin{theorem}\label{sir2}
Let $ (A,\mathfrak{m}) $ be an excellent Gorenstein local ring of dimension $ d $. Suppose $ A $ has an isolated singularity. Let $ I $ be a normal ideal of $A$ with $ \operatorname{height}(I) \geqslant 2 $ and $ l(I) < d $. Let $ M $ be a Cohen-Macaulay $ A $-module of dimension $ d - 1 $ with $ \operatorname{projdim}_A(M) = \infty $. Let $ X_M $ be an MCM approximation of $ M $. Set $ N := \operatorname{Syz}^A_1(X_M) $. Then the following statements are equivalent:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{m} \notin T^\infty_1(I, M) $ {\rm(}i.e., $ \mathfrak{m} \notin \operatorname{Ass}_A(\operatorname{Tor}^A_1(M,A/{I^n})) $ for all $ n \gg 0 ${\rm )}.
\item $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(N) $ {\rm(}equivalently, $ \operatorname{depth}(N/{I^nN}) \geqslant 1 $ for all $ n \gg 0 ${\rm )}.
\end{enumerate}
Furthermore, if this holds true, then $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(N)) \geqslant 2 $ for every $ n > \operatorname{amp}_I(N) $. \end{theorem}
\begin{proof}
Note that $ N = \operatorname{Syz}^A_1(X_M) $ is a non-zero module, because $ \operatorname{projdim}_A(M) = \infty $ forces $ X_M $ to be non-free. We have an exact sequence $ 0 \to N \to G \to X_M \to 0 $, where $ G $ is a free $A$-module. The corresponding long exact sequences of Tor-modules yield an exact sequence (for every $ n \geqslant 1 $):
\begin{align}\label{3bb}
0 \longrightarrow \operatorname{Tor}^A_1(X_M,A/{I^n}) \longrightarrow N/{I^nN} \longrightarrow G/{I^nG} \longrightarrow {X_M}/{I^nX_M} \longrightarrow 0.
\end{align}
(i) $ \Rightarrow $ (ii): Since $ \mathfrak{m} \notin T^\infty_1(I, M) $, by virtue of Lemma~\ref{oi}, $ \operatorname{Tor}^A_1(X_M, A/{I^n}) = 0 $ for all $ n \gg 0 $. Hence \eqref{3bb} yields that $ \operatorname{Ass}^\infty_I(N) \subseteq \operatorname{Ass}^\infty_I(G) = \operatorname{Ass}^\infty_I(A) $. Therefore, since $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(A) $ (due to \cite[Proposition~4.1]{Mcada06}), we obtain that $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(N) $.
(ii) $ \Rightarrow $ (i): Since $ A $ has an isolated singularity, as in the proof of Lemma~\ref{oi}, it follows that $ \operatorname{Ass}_A\left( \operatorname{Tor}^A_1(X_M, A/{I^n}) \right) \subseteq \{ \mathfrak{m} \} $ for every $ n \geqslant 1 $. Thus, since $ \mathfrak{m} \notin \operatorname{Ass}^\infty_I(N) $, in view of \eqref{3bb}, it can be observed that $ \operatorname{Ass}_A\left( \operatorname{Tor}^A_1(X_M, A/{I^n}) \right) = \emptyset $ for all $ n \gg 0 $. Equivalently, $ \operatorname{Tor}^A_1(X_M, A/{I^n}) = 0 $ for all $ n \gg 0 $. Hence the implication follows from Lemma~\ref{oi}.
Since $ \operatorname{height}(I) \geqslant 2 $, we have $ \operatorname{grade}(G_{I^n}(A)_+, G_{I^n}(A)) \geqslant 2 $ for all $ n \gg 0 $; see Theorem~\ref{RR}. So the last assertion follows from Theorem~\ref{sir1}. \end{proof}
\section{Asymptotic prime divisors over complete intersections}\label{sec5}
Let $ A $ be a local complete intersection ring, and $ M $ be a finitely generated $ A $-module. Suppose either $ I $ is a principal ideal or $ I $ has a principal reduction generated by an $ A $-regular element. In this section, we analyze the asymptotic stability of certain associated prime ideals of the Tor-modules $ \operatorname{Tor}_i^A(M, A/I^n) $ as both $ i $ and $ n $ tend to $ \infty $.
\subsection{Module structure on Tor}\label{Module structure on Tor}
We first discuss the graded module structure on a direct sum of Tor-modules, which we use in order to prove our main results on asymptotic prime divisors of Tor-modules.
Let $Q$ be a ring, and ${\bf f} = f_1, \ldots, f_c$ be a $Q$-regular sequence. Set $A := Q/({\bf f})$. Let $M$ and $N$ be finitely generated $A$-modules.
\s Let $ \mathbb{F} : \quad \cdots \rightarrow F_n \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow 0 $ be a free resolution of $M$ by finitely generated free $A$-modules. Let \[ t'_j : \mathbb{F} \longrightarrow \mathbb{F}(-2), \quad 1 \leqslant j \leqslant c \] be the {\it Eisenbud operators} defined by ${\bf f} = f_1, \ldots, f_c$; see \cite[Section~1]{Eis80}. In view of \cite[Corollary~1.4]{Eis80}, the chain maps $t'_j$ are determined uniquely up to homotopy. In particular, they induce well-defined maps \[ t_j : \operatorname{Tor}_i^A(M,N) \longrightarrow \operatorname{Tor}_{i-2}^A(M,N) \] (for all $i \in \mathbb{Z}$ and $j = 1,\ldots,c$) on the homology of $ \mathbb{F} \otimes_A N$. In \cite[Corollary~1.5]{Eis80}, it is shown that the chain maps $t'_j$ ($1 \leqslant j \leqslant c$) commute up to homotopy. Thus \begin{equation*} \operatorname{Tor}_{\star}^A(M,N) := \bigoplus_{i \in \mathbb{Z}} \operatorname{Tor}_{-i}^A(M,N) \end{equation*} turns into a $\mathbb{Z}$-graded $\mathscr{S} := A[t_1,\ldots,t_c]$-module, where $\mathscr{S}$ is the $\mathbb{N}$-graded polynomial ring over $A$ in the operators $t_j$ defined by ${\bf f}$ with $\deg(t_j) = 2$ for all $j = 1,\ldots,c$. Here note that for every $i \in \mathbb{Z}$, the $i$th component of $\operatorname{Tor}_{\star}^A(M,N)$ is $\operatorname{Tor}_{-i}^A(M,N)$. This structure depends only on ${\bf f}$; it is natural in both module arguments and commutes with the connecting maps induced by short exact sequences.
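To keep a concrete instance in mind, we record a standard hypersurface computation (included only by way of illustration): let $ Q = k[[x]] $, $ c = 1 $, $ {\bf f} = f_1 = x^2 $, $ A = Q/(x^2) $, and $ M = N = k $. The minimal free resolution of $ k $ over $ A $ is $ \mathbb{F} : \cdots \xrightarrow{x} A \xrightarrow{x} A \rightarrow 0 $. Lifting the differentials to $ Q $ gives $ \widetilde{d}^{\,2} = x^2 \cdot \operatorname{id} = f_1 \cdot \operatorname{id} $, so the Eisenbud operator $ t'_1 : \mathbb{F} \rightarrow \mathbb{F}(-2) $ may be taken to be the identity in each homological degree. Since the differentials of $ \mathbb{F} \otimes_A k $ vanish, we have $ \operatorname{Tor}^A_i(k,k) \cong k $ for all $ i \geqslant 0 $, and $ t_1 $ acts as the identity map $ \operatorname{Tor}^A_i(k,k) \rightarrow \operatorname{Tor}^A_{i-2}(k,k) $ for every $ i \geqslant 2 $. Consequently $ \operatorname{Tor}_{\star}^A(k,k) $ is a *Artinian, but not finitely generated, graded $ A[t_1] $-module; this illustrates Gulliksen's result quoted in Lemma~\ref{lemma: Dao, *Artinian} below.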
\subsection{Stability of primes in $ \operatorname{Ass}_A\left( \operatorname{Tor}_i^A(M,N) \right) $}
Here we study the asymptotic stability of certain associated prime ideals of Tor-modules $ \operatorname{Tor}_i^A(M, N) $, $(i \geqslant 0)$, where $ M $ and $ N $ are finitely generated modules over a local complete intersection ring $A$ (see Corollary~\ref{corollary: asymptotic Ass on Tor}).
We denote the collection of all minimal prime ideals in the support of $ M $ by $ \operatorname{Min}_A(M) $ (or simply by $ \operatorname{Min}(M) $). It is well-known that $ \operatorname{Min}(M) \subseteq \operatorname{Ass}_A(M) \subseteq \operatorname{Supp}(M) $. Recall that a local ring $(A,\mathfrak{m})$ is called a {\it complete intersection ring} if its $ \mathfrak{m} $-adic completion can be written as $\widehat{A} = Q/({\bf f})$, where $Q$ is a complete regular local ring and ${\bf f} = f_1,\ldots,f_c$ is a $Q$-regular sequence. To prove our results, we may assume that $A$ is complete because of the following well-known fact about associated primes:
\begin{lemma}\label{lemma: Ass: Completion}
For an $A$-module $M$, we have
\[
\operatorname{Ass}_A(M) = \left\{ \mathfrak{q} \cap A : \mathfrak{q} \in \operatorname{Ass}_{\widehat{A}} \left( M \otimes_A \widehat{A} \right) \right\}.
\] \end{lemma} It is now enough to prove our result with the following hypothesis:
\begin{hypothesis}\label{hypothesis 1}
Let $A = Q/(f_1,\ldots,f_c)$, where $Q$ is a regular local ring, and $ f_1,\ldots,f_c $ is a $Q$-regular sequence. Let $M$ and $N$ be finitely generated $A$-modules. \end{hypothesis}
We show our result with the following more general hypothesis:
\begin{hypothesis}\label{hypothesis 2}
Let $A = Q/\mathfrak{a}$, where $Q$ is a regular ring of finite Krull dimension, and $ \mathfrak{a} \subseteq Q $ is an ideal such that $ \mathfrak{a}_{\mathfrak{q}} \subseteq Q_{\mathfrak{q}} $ is generated by a $ Q_{\mathfrak{q}} $-regular sequence for every $ \mathfrak{q} \in \operatorname{Var}(\mathfrak{a}) $. Let $M$ and $N$ be finitely generated $A$-modules. \end{hypothesis}
It should be noticed that if a ring $ A $ satisfies Hypothesis~\ref{hypothesis 1}, then $ A $ also satisfies Hypothesis~\ref{hypothesis 2}. Under Hypothesis~\ref{hypothesis 2}, we have the following well-known bounds for complete intersection dimension and complexity: \begin{align*} \operatorname{CI-dim}_A(M) & = \max\{ \operatorname{CI-dim}_{A_{\mathfrak{m}}}(M_{\mathfrak{m}}) : \mathfrak{m} \in \operatorname{Max}(A) \} \quad \mbox{[by definition]} \\ & \leqslant \max\{ \dim(A_{\mathfrak{m}}) : \mathfrak{m} \in \operatorname{Max}(A) \} \quad \mbox{[see, e.g., \cite[4.1.5]{AB00}]} \\ & = \dim(A); \\ \operatorname{cx}_A(M) & = \max\{ \operatorname{cx}_{A_{\mathfrak{m}}}( M_{\mathfrak{m}} ) : \mathfrak{m} \in \operatorname{Max}(A) \} \quad \mbox{[by definition]} \\ & \leqslant \max\{ \operatorname{codim}(A_{\mathfrak{m}}) : \mathfrak{m} \in \operatorname{Max}(A) \} \quad \mbox{[see, e.g., \cite[1.4]{AB00}]} \\ & \leqslant \dim(Q). \end{align*} Therefore, by \cite[Theorem~4.9]{AB00}, we have the following result: \begin{theorem}\label{theorem: vanishing of Tor}
Under {\rm Hypothesis~\ref{hypothesis 2}}, the following statements are equivalent:
\begin{enumerate}[{\rm (1)}]
\item $ \operatorname{Tor}_i^A(M,N) = 0 $ for $ \dim(Q) + 1 $ consecutive values of $ i > \dim(A) $;
\item $ \operatorname{Tor}_i^A(M,N) = 0 $ for all $ i \gg 0 $;
\item $ \operatorname{Tor}_i^A(M,N) = 0 $ for all $ i > \dim(A) $.
\end{enumerate} \end{theorem}
Let us recall the following asymptotic behaviour of Tor-modules.
\begin{lemma}\cite[Theorem~3.1]{G} \label{lemma: Dao, *Artinian}
Under {\rm Hypothesis~\ref{hypothesis 1}}, if $ \lambda_A\left( \operatorname{Tor}_i^A(M,N) \right) $ is finite for all $ i \gg 0 $ {\rm (}where $ \lambda_A(-) $ is the length function{\rm )}, then
\[
\bigoplus_{i \ll 0} \operatorname{Tor}_{-i}^A(M,N) \quad\mbox{is a *Artinian graded $A[t_1,\ldots,t_c]$-module},
\]
where $ \deg(t_j) = 2 $ for all $ j = 1,\ldots,c $. \end{lemma}
As a consequence of this lemma, we obtain the following result:
\begin{proposition}\label{proposition: Tor: polynomials}
Let $ A $ be a local complete intersection ring. Let $ M $ and $ N $ be finitely generated $ A $-modules. If $ \lambda_A\left( \operatorname{Tor}_i^A(M,N) \right) $ is finite for all sufficiently large integers $ i $, then we have that
\[
\lambda_A\left( \operatorname{Tor}_{2i}^A(M,N) \right) \;\;\mbox{ and }\;\; \lambda_A\left( \operatorname{Tor}_{2i + 1}^A(M,N) \right)
\]
are given by polynomials in $ i $ over $ \mathbb{Q} $ for all sufficiently large integers $ i $. \end{proposition}
\begin{proof}
Without loss of generality, we may assume that $A$ is complete. Then the proposition follows from Lemmas~\ref{lemma: Dao, *Artinian} and \ref{lemma: *Artinian, Hilbert function}. \end{proof}
The following result is a consequence of the graded version of Matlis duality and the Hilbert-Serre Theorem. It might be known to experts, but we give a proof here for the reader's convenience.
\begin{lemma}\label{lemma: *Artinian, Hilbert function}
Let $ (A,\mathfrak{m}) $ be a complete local ring. Let $ L = \bigoplus_{i \in \mathbb{Z}} L_i $ be a *Artinian graded $A[t_1,\ldots,t_c]$-module, where $ \deg(t_j) = 2 $ for all $ 1 \leqslant j \leqslant c $, and $ \lambda_A(L_i) $ is finite for all $ i \ll 0 $. Then $ \lambda_A(L_{-2i}) $ and $ \lambda_A(L_{-2i - 1}) $ are given by polynomials in $ i $ over $ \mathbb{Q} $ for all sufficiently large $ i $. \end{lemma}
\begin{proof}
We use the graded Matlis duality. Let us recall the following definitions: *complete from \cite[p.\,142]{BH98}; *local from \cite[p.\,139]{BH98}; and $ \mbox{*}\operatorname{Hom}(-,-) $ from \cite[p.\,33]{BH98}. Note that $A[t_1,\ldots,t_c]$ is a Noetherian *complete *local ring. We set $ E := E_A(A/\mathfrak{m}) $, the injective hull of $ A/\mathfrak{m} $. Also set $ L^{\vee} := \mbox{*}\operatorname{Hom}(L,E) $. Notice that $ (L^{\vee})_i = \operatorname{Hom}_A(L_{-i},E) $ for all $ i \in \mathbb{Z} $.
Since $A[t_1,\ldots,t_c]$ is a Noetherian *complete *local ring, by virtue of Matlis duality for graded modules (\cite[3.6.17]{BH98}), we obtain that $ L^{\vee} $ is a finitely generated graded $A[t_1,\ldots,t_c]$-module. Let $ i_0 $ be such that $ \lambda_A(L_{-i}) $ is finite for all $ i \geqslant i_0 $. Hence
\begin{equation}\label{lemma: *Artinian, Hilbert function: equation 1}
\lambda_A\left( (L^{\vee})_i \right) = \lambda_A\left( \operatorname{Hom}_A(L_{-i},E) \right) = \lambda_A( L_{-i} )
\end{equation}
is finite for all $ i \geqslant i_0 $; see, e.g., \cite[3.2.12]{BH98}. Since $ \bigoplus_{i \geqslant i_0} (L^{\vee})_i $ is a graded $A[t_1,\ldots,t_c]$-submodule of $ L^{\vee} $, we have that $ \bigoplus_{i \geqslant i_0} (L^{\vee})_i $ is a finitely generated graded module over $A[t_1,\ldots,t_c]$. Therefore, by the Hilbert-Serre Theorem, we obtain that $ \lambda_A\left( (L^{\vee})_{2i} \right) $ and $ \lambda_A\left( (L^{\vee})_{2i+1} \right) $ are given by polynomials in $ i $ over $ \mathbb{Q} $ for all $ i \gg 0 $, and hence the lemma follows from \eqref{lemma: *Artinian, Hilbert function: equation 1}. \end{proof}
We are now in a position to prove our main result of this section. \begin{theorem}\label{theorem: asymptotic Ass on Tor}
Under {\rm Hypothesis~\ref{hypothesis 2}}, exactly one of the following alternatives must hold:
\begin{enumerate}[{\rm (1)}]
\item $ \operatorname{Tor}_i^A(M, N) = 0 $ for all $ i > \dim(A) $;
\item There exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i}^A(M, N) \right) $ for all $ i \gg 0 $;
\item $ \mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i+1}^A(M, N) \right) $ for all $ i \gg 0 $.
\end{enumerate}
\end{enumerate} \end{theorem} \begin{proof}
We set
\[
\mathcal{B} := \bigcup\left\{ \operatorname{Supp}\left( \operatorname{Tor}_i^A(M,N) \right) : \dim(A) < i \leqslant \dim(A) + \dim(Q) + 1 \right\}.
\]
If $ \mathcal{B} = \emptyset $, then $ \operatorname{Tor}_i^A(M,N) = 0 $ for all $ \dim(A) < i \leqslant \dim(A) + \dim(Q) + 1 $, and hence, by virtue of Theorem~\ref{theorem: vanishing of Tor}, we get that $ \operatorname{Tor}_i^A(M,N) = 0 $ for all $ i > \dim(A) $. So we may assume that $ \mathcal{B} \neq \emptyset $. In this case, we prove that statement (2) holds true. We denote the collection of minimal primes in $ \mathcal{B} $ by $ \mathcal{A} $, i.e.,
\begin{equation}\label{theorem: asymptotic Ass on Tor: equation 1}
\mathcal{A} := \left\{ \mathfrak{p} \in \mathcal{B} : \mathfrak{q} \in \operatorname{Spec}(A) \mbox{ and } \mathfrak{q} \subsetneq \mathfrak{p} \Rightarrow \mathfrak{q} \notin \mathcal{B} \right\}.
\end{equation}
Clearly, $ \mathcal{A} $ is a non-empty finite subset of $ \operatorname{Spec}(A) $. We claim that $ \mathcal{A} $ satisfies statement (2) in the theorem. To prove this claim, let us fix an arbitrary $ \mathfrak{p} \in \mathcal{A} $.
If $ \mathfrak{q} \in \operatorname{Spec}(A) $ is such that $ \mathfrak{q} \subsetneq \mathfrak{p} $, then $ \mathfrak{q} \notin \mathcal{B} $, i.e., $ \operatorname{Tor}_i^{A_{\mathfrak{q}}}(M_{\mathfrak{q}}, N_{\mathfrak{q}}) = 0 $ for all $ \dim(A) < i \leqslant \dim(A) + \dim(Q) + 1 $, and hence, in view of Theorem~\ref{theorem: vanishing of Tor}, we obtain that $ \operatorname{Tor}_i^{A_{\mathfrak{q}}}(M_{\mathfrak{q}}, N_{\mathfrak{q}}) = 0 $ for all $ i > \dim(A) $. Therefore
\begin{equation}\label{theorem: asymptotic Ass on Tor: equation 2}
\operatorname{Supp}\left( \operatorname{Tor}_i^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) \right) \subseteq \left\{ \mathfrak{p} A_{\mathfrak{p}} \right\} \quad \mbox{for all } i > \dim(A),
\end{equation}
which gives
\begin{equation}\label{theorem: asymptotic Ass on Tor: equation 3}
\lambda_{A_{\mathfrak{p}}}\left( \operatorname{Tor}_i^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) \right) \quad \mbox{is finite for all } i > \dim(A).
\end{equation}
Since $ A_{\mathfrak{p}} $ satisfies Hypothesis~\ref{hypothesis 1}, by Proposition~\ref{proposition: Tor: polynomials}, there are polynomials $ P_1(z) $ and $ P_2(z) $ in $ z $ over $ \mathbb{Q} $ such that
\begin{align}
\lambda_{A_{\mathfrak{p}}}\left( \operatorname{Tor}_{2i}^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) \right) & = P_1(i) \quad\mbox{for all } i \gg 0;\label{theorem: asymptotic Ass on Tor: equation 4}\\
\lambda_{A_{\mathfrak{p}}}\left( \operatorname{Tor}_{2i+1}^{A_{\mathfrak{p}}} (M_{\mathfrak{p}}, N_{\mathfrak{p}}) \right) & = P_2(i) \quad\mbox{for all } i \gg 0.\label{theorem: asymptotic Ass on Tor: equation 5}
\end{align}
We now show that $ P_1(z) $ and $ P_2(z) $ cannot both be the zero polynomial. If this is not the case, then we have
\[
\operatorname{Tor}_i^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) = 0 \quad \mbox{for all } i \gg 0,
\]
which yields (by Theorem~\ref{theorem: vanishing of Tor}) that
\[
\operatorname{Tor}_i^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) = 0 \quad \mbox{for all } i > \dim(A),
\]
i.e., $ \mathfrak{p} \notin \mathcal{B} $, and hence $ \mathfrak{p} \notin \mathcal{A} $, which is a contradiction. Therefore at least one of $ P_1 $ and $ P_2 $ must be a non-zero polynomial.
Assume that $ P_1 $ is a non-zero polynomial. Then $ P_1 $ has only finitely many roots. Therefore $ P_1(i) \neq 0 $ for all $ i \gg 0 $, which yields $ \operatorname{Tor}_{2i}^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) \neq 0 $ for all $ i \gg 0 $. So, in view of \eqref{theorem: asymptotic Ass on Tor: equation 2}, we obtain that
\[
\operatorname{Supp}\left( \operatorname{Tor}_i^{A_{\mathfrak{p}}}(M_{\mathfrak{p}}, N_{\mathfrak{p}}) \right) = \left\{ \mathfrak{p} A_{\mathfrak{p}} \right\} \quad \mbox{for all } i \gg 0,
\]
which implies that $ \mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i}^A(M, N) \right) $ for all $ i \gg 0 $.
Similarly, if $ P_2 $ is a non-zero polynomial, then we have that
\[
\mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i+1}^A(M, N) \right) \quad\mbox{for all } i \gg 0.
\]
This completes the proof of the theorem. \end{proof} As a corollary of this theorem, we obtain the following result on associated primes.
\begin{corollary}\label{corollary: asymptotic Ass on Tor}
Let $ A $ be a local complete intersection ring. Let $ M $ and $ N $ be finitely generated $ A $-modules. Then exactly one of the following alternatives must hold:
\begin{enumerate}[{\rm (1)}]
\item $ \operatorname{Tor}_i^A(M, N) = 0 $ for all $ i > \dim(A) $;
\item There exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i}^A(M, N) \right) $ for all $ i \gg 0 $;
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i+1}^A(M, N) \right) $ for all $ i \gg 0 $.
\end{enumerate}
\end{enumerate} \end{corollary} \begin{proof}
For every finitely generated $ A $-module $ D $, we have $ \operatorname{Min}_A(D) \subseteq \operatorname{Ass}_A(D) $. Therefore, if $ A $ is complete, then the corollary follows from Theorem~\ref{theorem: asymptotic Ass on Tor}. Now the general case can be deduced by using Lemma~\ref{lemma: Ass: Completion}. \end{proof}
Here we give an example which shows that statements (i) and (ii) in assertion (2) of Corollary~\ref{corollary: asymptotic Ass on Tor} need not hold together.
\begin{example}\label{example: two sets of stable values: Ass: Tor}
Let $ Q = k[[u,x]] $ be the ring of formal power series in the variables $ u $ and $ x $ over a field $k$. We set $ A := Q/(ux) $ and $ M = N := Q/(u) $. Clearly, $ A $ is a local complete intersection ring, and $M$, $N$ are $A$-modules. Then, for every $ i \geqslant 1 $, we have that $ \operatorname{Tor}_{2i}^A(M, N) = 0 $ and $ \operatorname{Tor}_{2i-1}^A(M, N) \cong k $; see \cite[Example~4.3]{AB00}. So, for all $ i \geqslant 1 $, we obtain that
\begin{align*}
& \operatorname{Ass}_A\left( \operatorname{Tor}_{2i}^A(M, N) \right) = \emptyset \quad\mbox{and} \\
& \operatorname{Ass}_A\left( \operatorname{Tor}_{2i-1}^A(M, N) \right) = \operatorname{Ass}_A(k) = \{ (u,x)/(ux) \}.
\end{align*} \end{example}
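For the reader's convenience, here is a sketch of how this computation can be verified directly (only a sketch; see \cite[Example~4.3]{AB00} for details). Since $ (0 :_A u) = (x) $ and $ (0 :_A x) = (u) $, the module $ M = A/(u) $ admits the $2$-periodic free resolution
\[
\cdots \xrightarrow{\,u\,} A \xrightarrow{\,x\,} A \xrightarrow{\,u\,} A \longrightarrow M \longrightarrow 0.
\]
Applying $ - \otimes_A N $ with $ N = A/(u) \cong k[[x]] $, multiplication by $ u $ becomes the zero map on $ N $, while multiplication by $ x $ is injective on $ N $ with cokernel $ k $. Hence $ \operatorname{Tor}_{2i}^A(M,N) = 0 $ and $ \operatorname{Tor}_{2i-1}^A(M,N) \cong k $ for all $ i \geqslant 1 $, as claimed.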
\subsection{Stability of primes in $ \operatorname{Ass}_A\left( \operatorname{Tor}_i^A(M,A/I^n) \right) $}
We now study the asymptotic stability of certain associated prime ideals of Tor-modules $ \operatorname{Tor}_i^A(M, A/I^n) $, $(i, n \geqslant 0)$, where $M$ is a finitely generated module over a local complete intersection ring $A$, and either $ I $ is a principal ideal or $ I $ has a principal reduction generated by an $ A $-regular element (see Corollary~\ref{corollary: asymptotic ass: Tor: for special ideals}). We start with the following lemma which we use in order to prove our result when $ I $ is a principal ideal.
\begin{lemma}\label{lemma: Tor: principal ideal}
Let $ A $ be a Noetherian ring, and $ M $ be an $ A $-module. Fix an element $ a \in A $. Then there exist an ideal $ J $ of $ A $ and a positive integer $ n_0 $ such that
\[
\operatorname{Tor}_{i + 2}^A\big(M, A/(a^n)\big) \cong \operatorname{Tor}_i^A(M, J) \quad \mbox{for all } i \geqslant 1 \mbox{ and } n \geqslant n_0.
\] \end{lemma}
\begin{proof}
For every integer $ n \geqslant 1 $, we consider the following short exact sequence:
\[
0 \to (0 :_A a^n) \to A \to (a^n) \to 0,
\]
which yields the following isomorphisms:
\begin{equation}\label{lemma: Tor: principal ideal: equation 1}
\operatorname{Tor}_{i + 1}^A\big(M, (a^n)\big) \cong \operatorname{Tor}_i^A\big(M, (0 :_A a^n)\big) \quad \mbox{for all } i \geqslant 1.
\end{equation}
For every $ n \geqslant 1 $, the short exact sequence $ 0 \to (a^n) \to A \to A/(a^n) \to 0 $ gives
\begin{equation}\label{lemma: Tor: principal ideal: equation 2}
\operatorname{Tor}_{i + 1}^A\big(M, A/(a^n)\big) \cong \operatorname{Tor}_i^A\big(M, (a^n)\big) \quad \mbox{for all } i \geqslant 1.
\end{equation}
Thus \eqref{lemma: Tor: principal ideal: equation 1} and \eqref{lemma: Tor: principal ideal: equation 2} together yield
\begin{equation}\label{lemma: Tor: principal ideal: equation 3}
\operatorname{Tor}_{i + 2}^A\big(M, A/(a^n)\big) \cong \operatorname{Tor}_i^A\big(M, (0 :_A a^n)\big) \quad \mbox{for all } i \geqslant 1.
\end{equation}
Since $ A $ is a Noetherian ring, the ascending chain of ideals
\[
\big(0 :_A a\big) \subseteq \big(0 :_A a^2\big) \subseteq \big(0 :_A a^3\big) \subseteq \cdots
\]
stabilizes, i.e., there exists a positive integer $ n_0 $ such that
\begin{equation}\label{lemma: Tor: principal ideal: equation 4}
\big(0 :_A a^n\big) = \big(0 :_A a^{n_0}\big) \quad \mbox{for all } n \geqslant n_0.
\end{equation}
Then the lemma follows from \eqref{lemma: Tor: principal ideal: equation 3} and \eqref{lemma: Tor: principal ideal: equation 4} by setting $ J := \left(0 :_A a^{n_0}\right) $. \end{proof}
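As an illustration (a sketch only), take $ A = k[[u,x]]/(ux) $ and $ a = u $, as in Example~\ref{example: two sets of stable values: Ass: Tor}. Since $ k[[u,x]] $ is a domain and $ (x) $ is prime, one checks that $ (0 :_A u^n) = xA $ for every $ n \geqslant 1 $, so $ n_0 = 1 $ and $ J = xA \cong A/(0 :_A x) = A/(u) $. The lemma then gives
\[
\operatorname{Tor}_{i + 2}^A\big(M, A/(u^n)\big) \cong \operatorname{Tor}_i^A\big(M, A/(u)\big) \quad \mbox{for all } i \geqslant 1 \mbox{ and } n \geqslant 1,
\]
which, for $ M = A/(u) $ and $ n = 1 $, recovers the $2$-periodicity observed in Example~\ref{example: two sets of stable values: Ass: Tor}.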
Here is another lemma which we use in order to prove our result when $ I $ has a principal reduction generated by an $ A $-regular element.
\begin{lemma}\label{lemma: Tor: I has a principal reduction ideal}
Let $ A $ be a ring, $ M $ an $ A $-module, and $ I $ an ideal of $ A $ having a principal reduction generated by an $ A $-regular element. Then there exist an ideal $ J $ of $ A $ and a positive integer $ n_0 $ such that
\[
\operatorname{Tor}_i^A\big(M, A/I^n\big) \cong \operatorname{Tor}_i^A(M, A/J) \quad \mbox{for all } i \geqslant 2 \mbox{ and } n \geqslant n_0.
\] \end{lemma}
\begin{proof}
Since $ I $ has a principal reduction generated by an $ A $-regular element, there exist an $ A $-regular element $ y \in I $ and a positive integer $ n_0 $ such that
\[
I^{n+1} = y I^n ~\mbox{ for all } n \geqslant n_0.
\]
Then it can be shown that for every $ n \geqslant n_0 $, we obtain a short exact sequence:
\begin{equation}\label{lemma: Tor: I has a principal reduction ideal: equation 1}
0 \longrightarrow A/I^n \stackrel{y\cdot}{\longrightarrow} A/I^{n+1} \longrightarrow A/(y) \longrightarrow 0.
\end{equation}
Now note that, since $ y $ is an $ A $-regular element, $ A/(y) $ has projective dimension at most one, so $ \operatorname{Tor}_i^A\big(M, A/(y)\big) = 0 $ for all $ i \geqslant 2 $. Therefore the short exact sequence \eqref{lemma: Tor: I has a principal reduction ideal: equation 1} yields
\[
\operatorname{Tor}_i^A\big(M, A/I^n\big) \cong \operatorname{Tor}_i^A\big(M, A/I^{n+1}\big) \quad \mbox{for all } i \geqslant 2 \mbox{ and } n \geqslant n_0.
\]
Hence the lemma follows by setting $ J := I^{n_0} $. \end{proof}
Now we can achieve one of the main goals of this article.
\begin{theorem}\label{theorem: asymptotic ass: Tor: for special ideals}
Let $ A $ be as in {\rm Hypothesis~\ref{hypothesis 2}}. Let $ M $ be a finitely generated $ A $-module, and $ I $ be an ideal of $ A $. Suppose either $ I $ is principal or $ I $ has a principal reduction generated by an $ A $-regular element. Then there exist positive integers $ i_0 $ and $ n_0 $ such that exactly one of the following alternatives must hold:
\begin{enumerate}[{\rm (1)}]
\item $ \operatorname{Tor}_i^A(M, A/I^n) = 0 $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $;
\item There exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $;
\item $ \mathfrak{p} \in \operatorname{Min}\left( \operatorname{Tor}_{2i+1}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $.
\end{enumerate}
\end{enumerate} \end{theorem}
\begin{proof}
If $ I $ is a principal ideal, then the result follows from Theorem~\ref{theorem: asymptotic Ass on Tor} and Lemma~\ref{lemma: Tor: principal ideal}. In the other case, i.e., when $ I $ has a principal reduction generated by an $ A $-regular element, we use Theorem~\ref{theorem: asymptotic Ass on Tor} and Lemma~\ref{lemma: Tor: I has a principal reduction ideal} to get the desired result. \end{proof}
As an immediate corollary of this theorem, we obtain the following:
\begin{corollary}\label{corollary: asymptotic ass: Tor: for special ideals}
Let $ A $ be a local complete intersection ring. Let $ M $ be a finitely generated $ A $-module, and $ I $ be an ideal of $ A $. Suppose either $ I $ is principal or $ I $ has a principal reduction generated by an $ A $-regular element. Then there exist positive integers $ i_0 $ and $ n_0 $ such that exactly one of the following alternatives must hold:
\begin{enumerate}[{\rm (1)}]
\item $ \operatorname{Tor}_i^A(M, A/I^n) = 0 $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $;
\item There exists a non-empty finite subset $ \mathcal{A} $ of $ \operatorname{Spec}(A) $ such that for every $ \mathfrak{p} \in \mathcal{A} $, at least one of the following statements holds true:
\begin{enumerate}[{\rm (i)}]
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $;
\item $ \mathfrak{p} \in \operatorname{Ass}_A\left( \operatorname{Tor}_{2i+1}^A(M, A/I^n) \right) $ for all $ i \geqslant i_0 $ and $ n \geqslant n_0 $.
\end{enumerate}
\end{enumerate} \end{corollary}
\begin{proof}
Since $ \operatorname{Min}_A(D) \subseteq \operatorname{Ass}_A(D) $ for every finitely generated $ A $-module $ D $, the corollary follows from Theorem~\ref{theorem: asymptotic ass: Tor: for special ideals} when $ A $ is complete.
Now note that if $ I $ is principal, then so is its completion $ \widehat{I} $. Also note that if $ I $ has a principal reduction generated by an $ A $-regular element, then $ \widehat{I} $ has a principal reduction generated by an $ \widehat{A} $-regular element. It is well-known that
\[
\operatorname{Tor}_i^A(M, A/I^n) \otimes_A \widehat{A} ~\cong \operatorname{Tor}_i^{\widehat{A}} \left( \widehat{M}, \widehat{A}/{(\widehat{I})}^n \right) \quad \mbox{for all } i, n \geqslant 0.
\]
Therefore the general case can be easily deduced by using Lemma~\ref{lemma: Ass: Completion}. \end{proof}
\end{document}
Volume 19 Supplement 10
Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): genomics
Revealing transcription factor and histone modification co-localization and dynamics across cell lines by integrating ChIP-seq and RNA-seq data
Lirong Zhang, Gaogao Xue, Junjie Liu, Qianzhong Li & Yong Wang
Abstract
Background
Interactions among transcription factors (TFs) and histone modifications (HMs) play an important role in the precise regulation of gene expression. The context specificity of those interactions, and their dynamics in normal and disease states, remains largely unknown. Recent developments in genomics technology enable transcription profiling by RNA-seq and protein-binding profiling by ChIP-seq. Integrative analysis of the two types of data allows us to investigate TF and HM interactions from both genome co-localization and downstream target gene expression.
Results
We propose an integrative pipeline to explore the co-localization of 55 TFs and 11 HMs and its dynamics in human GM12878 and K562 cells using matched ChIP-seq and RNA-seq data from ENCODE. We classify TFs and HMs into three types based on their binding enrichment around transcription start sites (TSSs). A set of statistical indexes is then proposed to characterize TF-TF and TF-HM co-localizations. We found that Rad21, SMC3, and CTCF co-localize across five cell lines. High-resolution Hi-C data in GM12878 show that they associate with most of the Hi-C peak loci via a specific CTCF-motif "anchor", supporting that CTCF, SMC3, and RAD21 co-localization plays an important role in 3D chromatin structure. Meanwhile, 17 TF-TF pairs are highly dynamic between GM12878 and K562. We then build SVM models to correlate high and low expression levels of target genes with TF binding and HM strength. We found that H3k9ac, H3k27ac, and three TFs (ELF1, TAF1, and POL2) are predictive, with accuracies of about 85~92%.
Conclusions
We propose a pipeline to analyze the co-localization of TFs and HMs and their dynamics across cell lines from ChIP-seq data, and to investigate their regulatory potency using RNA-seq. The integrative analysis of the two levels of data reveals new insights into the cooperation of TFs and HMs and is helpful in understanding the cell line specificity of TF/HM interactions.
Background
Gene expression is known to be regulated by transcription factors (TFs) and histone modifications (HMs). To achieve precise regulation, those regulatory factors often work in a cooperative way. Physically, TFs and HMs tend to localize together at regulatory elements (promoters, enhancers, or insulators) in the genome to achieve complex and accurate regulation of target genes [1,2,3,4]. For example, the initiation of transcription involves many protein-protein interactions among transcription factors, which bind to the promoter or enhancer and stabilize RNA polymerase [5,6,7]. In addition, recent studies have shown that histone modifications play significant regulatory roles in transcriptional initiation and elongation by interacting with transcription factors [8, 9]. Therefore, the co-localization of TF binding and HMs is critically important for understanding the precise control of gene expression [10, 11].
In general, there are two information sources useful for inferring cooperation among TFs and HMs. One is the downstream effect on the expression level of their target genes, which can be easily measured by microarray and RNA-seq. Previous studies have shown that TF binding and HMs are predictive of gene expression in some model organisms [12, 13]. It was found that histone modification levels and gene expression are very well correlated and that only a small number of HMs are necessary to accurately predict gene expression in human CD4+ T-cells [14]. Using a Bayesian network, causal and combinatorial relationships among HMs and gene expression were investigated and some known relationships were confirmed [15]. The other information source is the co-localization of TFs and HMs on chromatin, which can be measured by ChIP-seq technology [10]. Recently, Xie et al. [16] analyzed TF co-localization in human cells by a self-organizing map and revealed many interesting TF-TF associations and extensive changes across cell lines. Furthermore, Zhang et al. [17] took long-range interactions into account and developed a new tool, named 3CPET, to infer the probable protein complexes maintaining chromatin interactions. Taken together, a number of studies have proved that cooperative TF/HM interactions are important and can be investigated at various levels.
Here we argue that localization data and downstream gene expression levels should be integrated to predict high quality TF/HM interactions, because gene expression measures the result of TF/HM interactions while the upstream co-localization of TFs/HMs in the genome provides the causal explanation for that effect. Integration of the two information sources, the direct co-localization on chromatin and the indirect effect on gene expression, is necessary and holds the promise of improving inference accuracy. On this solid base, the detailed interactions among TFs and HMs, and their cell line specificity and disease specificity, can be investigated.
Thanks to the ENCODE Consortium, large scale data on the whole-genome localization of protein–DNA binding sites [18, 19] and the absolute concentration of transcripts [20] are available. In particular, for some cell lines it provides comprehensive ChIP-seq and matched RNA-seq data; for example, the genome-wide binding landscapes of many TFs and HMs together with target gene expression are available in the human GM12878 and K562 cell lines (Additional file 1: Table S1). This allows us to investigate the relationships among TF binding, HM location, and gene expression in a systematic and quantitative manner. Meanwhile, we can probe the dynamics of TF and HM co-localization in normal and cancer cell lines.
We propose a two-step integrative pipeline for ChIP-seq and RNA-seq. We first analyze and identify the cooperation of TFs and HMs as well as its dynamics across normal and cancer cell lines. Then we investigate the regulatory potency of all these cooperations in the gene expression process. To this end, we extracted signal peaks from the ChIP-seq data for 55 TFs and 11 HMs and the gene expression levels from the RNA-seq data in the human GM12878 and K562 cell lines (Additional file 1: Table S2). The localizations of the 55 TFs and 11 HMs were analyzed in the regions upstream and downstream of transcription start sites in the two cell lines. We observed three types of localization patterns, GM12878_rich_factor, K562_rich_factor, and unbiased_factor, based on their binding enrichment around the TSS. Then, we compared the overlap ratio and the average overlap ratio of TF binding or HMs in the two cell lines. The results are further used to analyze potential cooperation of TFs and HMs. Finally, we build an SVM classifier to predict highly and lowly expressed genes by utilizing the TF or HM association strength (TFAS) [21]. We found that two HMs (H3k9ac and H3k27ac) and three TFs (ELF1, TAF1, and POL2) are predictive, with accuracies of about 85~92%. The highest prediction accuracy, 93%, is obtained by the 66-factor model. Our research provides new insight into the cooperation of TFs and HMs in gene expression and is helpful for the study of the cooperation of various factors.
Results
The dynamics of TF and HM localization
We develop a two-step analysis pipeline (Fig. 1) to integrate ChIP-seq, RNA-seq, and genome annotation to pinpoint the unique roles of transcription factors and histone modifications in biological processes, and particularly their localization at specific DNA regions. Importantly, we correlate TF binding and HMs with gene expression levels to detect reliable cooperations related to downstream effects. Cross-cell-line comparison further indicates the dynamic patterns of those cooperations.
The two-step integrative pipeline to analyze matched ChIP-seq and RNA-seq data
Starting from the whole genome localization information produced by the ChIP-seq experiments, we counted the peak numbers of the 55 TFs and 11 HMs in the two cell lines. As shown in Fig. 2a, the peak numbers range from 211/207 to 52,162/77,063 in GM12878/K562. H3k4me1 has many peaks while POL3 has few. For some TFs or HMs, the peak numbers in the two cell lines are quite different. If we let a and b be the total peak numbers of a given TF or HM in the two cell lines, the values of |a − b|/(a + b) for JUND, ATF3, BCL3, and MAFK are 0.88, 0.81, 0.81, and 0.72 respectively, and the maximum value of |a − b|/(a + b) among the 11 HMs is obtained by H3k27me3, with 0.36. On the other hand, the values of |a − b|/(a + b) for POL3, PML, TAF1, and CTCF are 0.01, 0.02, 0.03, and 0.04 respectively, showing that the peak numbers of these transcription factors are consistent between the two cell lines.
The dynamics of TF and HM localization between GM12878 and K562. a The peak numbers of 55 TFs and 11 HMs in two cell lines. The X-axis is the number of peaks, and the Y-axis represents the name of TF/HM. b The signal intensity of six factors in a 40 kb DNA region which was separated into 200 bins flanking TSS in two cell lines. Each bin is 200 bp in size. The X-axis is the relative position of bins, and the Y-axis is the signal intensity of a given TF/HM. c The total difference index of 55 TFs and 11 HMs between GM12878 and K562. The X-axis represents the name of TF/HM, and the Y-axis is the total difference index
We next check the signal features of TF binding and HMs around TSSs, which are important for gene expression and regulation [13, 21, 22]. For the 9555 genes, in the two cell lines, we calculated the signal intensity of the 55 TFs and 11 HMs in each of the 200 bins and obtained their distribution features in a 40 kb DNA region. It turned out that the signal peaks are concentrated in the 4 kb region centered on the TSS: the closer a bin is to the TSS, the stronger the signal intensity of the TFs or HMs. The distributions of six factors in the two cell lines are shown in Fig. 2b. The signal intensities of CTCF and H3K4me1 show very similar distributions, but some TFs or HMs, such as BCL3, USF2, MAFK, and JUND, show large variation. Overall, there are three types of TFs and HMs based on their binding enrichment around the TSS in the two cell lines, which we named GM12878_rich_factor, K562_rich_factor, and unbiased_factor for the follow-up study. Compared with HMs, the variation of TFs is larger. The results indicate that the signal intensity carries rich information for comparing TF binding and HMs between normal and cancer cell lines.
To quantify the variation of a TF binding or HM between the two cell lines, we propose the total difference index Dsignal and the ratio f to investigate the dynamics of TF or HM localization between the two cell lines (refer to eqs. (2) and (3) in the Methods section for details). The rank of Dsignal for all 66 factors, shown in Fig. 2c, indicates the trend of all TFs' and HMs' variation between cell lines and is used for analyzing their dynamics in the two cell lines. The results show that some factors, such as CTCF, do not change much. Those factors mostly belong to the unbiased_factor set (32 factors) with 0.6 < f < 1.5 and −0.25 < Dsignal < 0.2. This is consistent with the fact that CTCF works as a general transcription factor and is involved in many cellular processes, including transcriptional regulation, insulator activity, and regulation of chromatin architecture. BCL3 and JUND show obvious differences; they belong to the GM12878_rich_factor set (15 factors) with f > 1.5 and Dsignal > 0.2 and the K562_rich_factor set (19 factors) with f < 0.6 and Dsignal < −0.25, respectively (Table 1). This demonstrates that our new index Dsignal provides rich information to extract TFs or HMs with cell line specificity for further investigation.
Table 1 Three sets of TF/HM based on their enrichment around TSS
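To make the binning behind Fig. 2b and the index Dsignal concrete, the following is a minimal Python sketch. The exact definitions of Dsignal and the ratio f are those of eqs. (2) and (3) in the Methods section (not reproduced here), so the normalization in total_difference_index is only an illustrative assumption; the function names and the per-gene loop are ours, and chromosome matching is omitted for brevity.

import numpy as np

BIN_SIZE = 200   # bp per bin
N_BINS = 200     # 200 bins = 40 kb window centered on the TSS

def bin_peak_counts(tss_list, peak_midpoints):
    # Count peak midpoints in 200 x 200-bp bins around each TSS.
    # Returns an (n_genes, N_BINS) matrix of counts N_ij for one factor.
    half = BIN_SIZE * N_BINS // 2                  # 20 kb on each side
    counts = np.zeros((len(tss_list), N_BINS), dtype=int)
    mids = np.asarray(peak_midpoints, dtype=np.int64)
    for i, tss in enumerate(tss_list):
        offsets = mids - (tss - half)
        in_window = (offsets >= 0) & (offsets < 2 * half)
        bins, n = np.unique(offsets[in_window] // BIN_SIZE, return_counts=True)
        counts[i, bins] += n
    return counts

def total_difference_index(signal_gm, signal_k562):
    # Illustrative normalized difference of aggregate TSS-proximal signal
    # between cell lines; the paper's D_signal (Methods eq. (2)) may use a
    # different normalization.
    sg, sk = float(np.sum(signal_gm)), float(np.sum(signal_k562))
    return (sg - sk) / (sg + sk)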
The dynamics of TF-TF co-localization
We next explore the cooperative interactions among TFs and HMs. In order to test the co-localization of TFs and HMs genome-wide and in enhancer regions, we calculated the overlap ratio Ro for all pairs of the 55 TFs (Fig. 3a, b, d and e). Then, the Pearson correlation coefficient (PCC) of the Ro values for the 1485 TF pairs in the two cell lines was calculated. The high correlation of 0.73 (p-value < 2.2e-16) suggests that the co-localizations are overall conserved (Fig. 3d). The overlap ratios of RAD21 and SMC3 are 78.2% and 81.4% for genome-wide and enhancer regions respectively in GM12878, and 75.6% and 91.2% respectively in K562. For the combination of POL2 and TAF1, the overlap ratios are 76.1% and 80.7% in GM12878 and 84.6% and 94.6% in K562. The results show that some TF pairs co-bind more strongly in enhancer regions. In contrast, the overlap ratio between ZNF274 and any other TF is almost zero, which may be due to the small number of ZNF274 peaks (233 in GM12878 and 305 in K562) noted in the peak counting above. Based on the pairwise relationships, combination patterns of three TFs with higher overlap ratios were obtained: POL2 + TAF1 + TBP (TATA Box Binding Protein) and Rad21 + SMC3 + CTCF show strong combination, with overlap ratios among them of more than 60%. By comparison, we found that their signal distributions around the TSS are largely consistent (the total difference indexes are −0.03, 0.03, and 0.11 for CTCF, RAD21, and SMC3). For the combination Rad21 + SMC3 + CTCF, the results are consistent with previous work showing that CTCF is required to recruit cohesin complex members, consisting of Smc1/Smc3 heterodimers and the two non-Smc subunits Scc1 (Rad21) and Scc3, to shared sites [16, 19, 23,24,25]. Furthermore, we obtained similar results for Rad21 + SMC3 + CTCF in the Helas3, Sknsh, and Hepg2 cell lines, demonstrating that the high overlap of these three TFs is conserved across cell lines (Table 2).
The overlap analysis of TF pairs in GM12878 and K562. The distribution of the overlap ratio for 1485 TF pairs in GM12878 ((a) for genome-wide and (d) for enhancer regions) and K562 ((b) for genome-wide and (e) for enhancer regions). The X-axis is the value of the overlap ratio, and the Y-axis is the number of TF pairs. c The distribution of the relative variation index IRV. The X-axis is the value of the relative variation index, and the Y-axis is the number of TF pairs. The left and right lines locate the positions μ ± 2σ, where μ is the mean and σ is the standard deviation of the relative variation index IRV. f The scatter plot and the Pearson correlation coefficient of the overlap ratio for 1485 TF pairs between the two cell lines. The X-axis and Y-axis are the overlap ratios of TF pairs in GM12878 and K562 respectively. Here RG and RK indicate the overlap ratios of TF pairs in GM12878 and K562 respectively
Table 2 The overlap ratios of TF combinations in five cell lines
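As one concrete reading of the overlap statistic, the sketch below computes, for two factors, the fraction of one factor's peaks that intersect at least one peak of the other. The paper's exact definition of Ro (and any symmetrization over the two factors of a pair) is given in the Methods, so treat this as a plausible implementation rather than the authors' code; merging the second peak set keeps the interval query simple.

import bisect
from collections import defaultdict

def merge_intervals(intervals):
    # Merge (start, end) intervals into disjoint sorted intervals.
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return merged

def overlap_ratio(peaks_a, peaks_b):
    # Fraction of peaks in peaks_a overlapping at least one peak in
    # peaks_b; peaks are (chrom, start, end) tuples.
    by_chrom = defaultdict(list)
    for c, s, e in peaks_b:
        by_chrom[c].append((s, e))
    merged = {c: merge_intervals(iv) for c, iv in by_chrom.items()}
    starts = {c: [s for s, _ in iv] for c, iv in merged.items()}
    hits = 0
    for c, s, e in peaks_a:
        iv = merged.get(c)
        if not iv:
            continue
        j = bisect.bisect_right(starts[c], e) - 1  # last interval starting <= e
        if j >= 0 and iv[j][1] >= s:               # ...that also ends >= s
            hits += 1
    return hits / len(peaks_a) if peaks_a else 0.0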
Importantly, we found strong Hi-C experimental data to support our finding in Table 2 and to provide a better understanding of the consistency of the combination Rad21 + SMC3 + CTCF across the five cell lines. Rao et al. used in situ Hi-C to probe the 3D architecture of genomes, constructing haploid and diploid maps of nine cell types [26]. The densest map, in human lymphoblastoid cells, contains 4.9 billion contacts, achieving 1 kb resolution. They found that in GM12878 the vast majority of peak loci are bound by the insulator protein CTCF (86%) and the cohesin subunits RAD21 (86%) and SMC3 (87%). This result is consistent with our finding for the CTCF + SMC3 + RAD21 combination. It is also supported by numerous reports, using a variety of experimental modalities, that suggest a role for CTCF and cohesin in mediating DNA loops. Because many of these loops demarcate domains, this observation is also consistent with studies suggesting that CTCF delimits structural and regulatory domains [27,28,29]. Rao et al. found that most peak loci encompass a unique DNA site containing a CTCF-binding motif, to which all three proteins (CTCF, SMC3, and RAD21) were bound [26]. They were thus able to associate most of the peak loci (6991 of 12,903, or 54%) with a specific CTCF-motif "anchor". This supports that CTCF, SMC3, and RAD21 co-localization plays an important role in 3D chromatin structure.
On the other hand, no matter how strong the overall correlation is, the overlap ratios of some TF pairs show great changes. Let RG and RK be the overlap ratios of a TF pair in GM12878 and K562 respectively; the relative variation index IRV between GM12878 and K562 is measured by (RG − RK)/(RG + RK + α) (Fig. 3c). Here α = 0.001 is added to avoid the case where RG + RK equals zero. The mean μ and the standard deviation σ of IRV are −0.05 and 0.36, and 90 of the 1485 TF pairs show significant variation, falling outside μ ± 2σ.
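The index IRV and the μ ± 2σ screen are straightforward to compute; a short sketch (function name ours):

import numpy as np

def dynamic_tf_pairs(r_gm, r_k562, alpha=1e-3, n_sigma=2.0):
    # Relative variation index IRV = (RG - RK) / (RG + RK + alpha) for
    # each TF pair, and the indices of pairs falling outside mean +/- 2 sd.
    rg = np.asarray(r_gm, dtype=float)
    rk = np.asarray(r_k562, dtype=float)
    irv = (rg - rk) / (rg + rk + alpha)
    mu, sigma = irv.mean(), irv.std()
    outliers = np.where(np.abs(irv - mu) > n_sigma * sigma)[0]
    return irv, outliers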
By requiring the overlap ratio of a TF pair in both cell lines to be larger than the third quartile, we obtained 17 TF pairs (Table 3) whose overlap ratios change substantially between the two cell lines. We found that 13 of these TF pairs involve JUND and only two TF pairs (BCL3:P300 and PML:USF1) have higher RG.
Table 3 TF pairs with cell line specificity
On the other hand, by calculating the TFAS values of the 55 TFs based on their signal peaks in the 40 kb region centered on the TSS, we obtained the PCC values of TF pairs to explore their interaction tendencies. The POL2 + TAF1 + TBP and Rad21 + SMC3 + CTCF combinations display higher PCC values, consistent with the above analysis (Table 4).
Table 4 TF pairs with top 10 PCC in GM12878 and K562
By choosing a threshold, we obtained a TF interaction network, as shown in Fig. 4. We use different node colors to label the GM12878_rich_factor, K562_rich_factor, and unbiased_factor sets, and the edge colors indicate the specificity in different cell lines (GM12878_specificity_TF pairs, K562_specificity_TF pairs, and unbiased_TF pairs). The network shows that JUND serves as a hub in K562 and plays important roles in cancer by interacting with other TFs. It is also interesting that JUND cooperates with ATF3, and together they work with the chromatin factors P300 and CEBPB. In GM12878, by contrast, BCL3 alone works with P300 and may guide the chromatin factor to activate regulatory regions. Compared with the giant complex in K562, GM12878 uses a very different strategy. CTCF + RAD21 + SMC3 and POL2 + TBP + TAF1 + PML form tight clusters in the network and are required in both cell types. This TF and chromatin factor cooperation is consistent with previous studies showing that HMs regulate gene transcription by modulating the local chromatin state and thereby changing the binding status of TFs within gene regulation regions [13, 30]. Analyses based on experimental data indicated that distinct HM patterns appear around TF binding sites, and the ChIP-seq signals of TF binding and HMs are highly predictive of each other [30,31,32]. Based on the clique-like interactions, we can predict that TBP and PML cooperate.
The interaction network among TFs. The node color labels the TF type (Red: GM12878_rich_factor; Blue: K562_rich_factor; Green: unbiased_factor) and the edge color indicate the specificity of TF pairs in different cell lines (Red: GM12878_specificity_TF pairs; blue: K562_specificity_TF pairs; Green: unbiased_TF pairs)
Next we add the HMs to the cooperation analysis. Based on the peak signal data of the 11 HMs, the overlap ratios between the 11 HMs and 55 TFs were calculated for the GM12878 cell line. The results show that the overlap features of the 11 HMs with TFs are consistent, but the overlap ratio of the same HM with different TFs varies widely (Additional file 1: Figure S1). Some HMs (H3K9ac and H3K79me2) have overlap ratios greater than 50%, indicating a close relationship between these HMs and TFs. The analysis in K562 gives consistent conclusions.
The average overlap ratio of TFs and HMs
To get a clear understanding of the potential cooperativity between a certain TF and other TFs, we defined a new parameter Rav, the average overlap ratio. For each TF or HM, we calculated its Rav and found that the Rav values of the 66 factors show clear divergence within a cell line, ranging from 3 to 40%. Among them, COREST, CMYC, ELK1, ETS1, and BCLAF1 are the top 5 TFs with the highest Rav in GM12878, and CREB1, ELK1, BCLAF1, POL3, and BCL3 are the top 5 TFs in K562 (Fig. 5a), with two common factors, ELK1 and BCLAF1. Next, we found that the average overlap ratios of some TFs vary significantly between GM12878 and K562. The Rav values of each TF in the two cell lines are roughly consistent for most TFs, with the exception of a few TFs including BCL3 and CREB1. For example, in K562, CREB1 has the top position in the Rav list, but in GM12878 it is ranked 33rd. Both are related to leukemia [33]. The BCL3 gene is a proto-oncogene candidate identified by its translocation into the immunoglobulin alpha-locus in some cases of B-cell leukemia, and CREB (cyclic AMP response element-binding protein) is a transcription factor associated with neoplastic myelopoiesis through regulating RFC3 (Replication factor C3) expression [34]. The results indicate that the TF combination patterns are specific in the GM12878 and K562 cell lines.
The average overlap ratio of TFs and HMs. a The average overlap ratio of 55 TFs in two cell lines. b The average overlap ratio of 11 HMs with other HMs (left) or 55 TFs (right). The X-axis represents the name of TFs/HMs, and The Y-axis represents the average overlap ratio
On the other hand, a TF belonging to a pair with a high overlap ratio may still have a low average overlap ratio. For example, no matter how large the total peak number or the pairwise overlap ratio for CTCF is, its average overlap ratio is always the lowest: about 8%, although the combinations of CTCF with Rad21 or SMC3 have a high overlap ratio of about 70% in both cell lines. Conversely, some TFs have low pairwise overlap ratios but high average overlap. For instance, the overlap ratios of ATF3 are less than 2% with CDP, EZH2, JUND, POL3, PU.1, RAD21, and ZNF274, yet its average overlap ratios are 28.4% in GM12878 and 23.02% in K562. The average overlap ratio of a TF thus provides a new clue about its overall interaction capability with other TFs.
TFs and HMs are two types of critical factors that coordinately regulate gene transcription. As a consequence, TF binding and histone modification are often highly correlated in TSS-proximal regions. Based on the same definition, we calculated the average overlap ratio of each HM with the other 10 HMs as well as with the 55 TFs in the two cell lines (Fig. 5b). The results indicate that HMs related to gene silencing, such as H3K27me3 and H3K9me3, have lower Rav, while those related to gene activation, such as H3K9ac and H3K27ac, have higher Rav. These results partly coincide with Bieberstein's studies [35], which showed that the activating histone modifications H3K4me3 and H3K9ac map to first exon-intron boundaries and help recruit general transcription factors (GTFs) to promoters [36]. It is possible that these marks change chromatin states by affecting the affinity between histones and DNA, and thereby affect TF binding to DNA. Among them, H3K9ac exhibits the maximum Rav, 80%. That provides a great chance to model TF binding affinities from histone modifications. As a result, HMs could help the prediction of TF binding sites [31].
Pinpointing TFs and TF-TF interactions related to gene expression
We next pinpoint TFs and TF-TF interactions and predict their downstream effect, i.e., predict gene expression levels from TFs or HMs. As we know, gene expression varies across cell lines and tissues. The prediction of gene expression levels in a particular tissue, and of their dynamics across tissues, is very important for the study of expression regulation. Here we look at the relative contribution of each factor in more detail in order to understand the gene regulatory mechanism. We constructed a classification model based on SVM to examine the relative importance of each individual factor [37]. Based on the FPKM (fragments per kilobase of exon per million fragments mapped) values [20], all 9555 genes were classified into two categories with high or low expression levels. The relative importance can then be represented by the capability to discriminate gene categories of high or low expression level in the human genome. In each cell line, an SVM model was built for each TF or HM with its association strength (TFAS) as input and the gene's group (high or low expression level) as output.
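For concreteness, the TFAS of Ouyang et al. [21] sums each binding peak's intensity weighted by an exponential decay in its distance to the TSS. A minimal sketch is below; the decay constant d0 is a tunable parameter of that definition, and the value used here is a placeholder rather than a setting taken from this paper.

import numpy as np

def tfas(tss, peak_mids, peak_intensities, d0=5000.0):
    # TF association strength of one factor for one gene, following
    # Ouyang et al. [21]: sum over peaks k of g_k * exp(-d_k / d0), where
    # g_k is the peak intensity and d_k its distance to the TSS.
    d = np.abs(np.asarray(peak_mids, dtype=float) - tss)
    g = np.asarray(peak_intensities, dtype=float)
    return float(np.sum(g * np.exp(-d / d0)))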
Firstly, we constructed an SVM model for the identification of gene expression level using each TF or HM as the single predictor. The prediction accuracies are shown in Table 5. Strikingly, most TFs alone can predict gene expression levels with fairly high accuracy. By direct comparison, TFs and HMs present different capabilities for predicting gene expression level. We found that some factors, such as H3k9ac, H3k27ac, ELF1, TAF1, and POL2, are significantly more predictive than others. These factors mostly possess transcriptional activation functions and have more peaks. Their binding is essential for transcriptional initiation at most promoters, so it makes sense that their binding signals have the highest predictive capabilities. In contrast, other factors such as MAFK, POL3, ZNF274, EZH2, NFE2, and TR4 are significantly less predictive. Those factors generally have fewer peaks and tend to have specific or complex functions. It is expected that TFs such as POL3 are less predictive because they are involved in initiating transcription of only a small fraction of promoters. This provides a clue that the factors with more peaks are related to cell type non-specific genes and the factors with fewer peaks are related to cell type specific genes. Furthermore, the TFs or HMs with low average overlap ratios may be associated with the expression of cell type specific genes. In general, enrichment (more binding peaks) of an HM or TF at the transcription start site is positively related to its predictive power.
Table 5 The prediction accuracies of gene expression level for 66 factors in two cell lines (Acc values)
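A minimal sketch of the single-predictor classifier: given one factor's TFAS per gene and the high/low labels, fit an SVM and report cross-validated accuracy. The RBF kernel, regularization constant, and 5-fold scheme below are illustrative scikit-learn defaults, not the paper's stated settings.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def single_factor_accuracy(tfas_values, labels, folds=5):
    # Cross-validated accuracy of an SVM predicting high (1) vs low (0)
    # expression from a single factor's TFAS.
    X = np.asarray(tfas_values, dtype=float).reshape(-1, 1)  # one feature per gene
    y = np.asarray(labels, dtype=int)
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_score(model, X, y, cv=folds, scoring="accuracy").mean()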
Next, the 66 association strengths of the 55 TFs and 11 HMs together were used to predict gene expression level, and the highest classification accuracies achieved are 92.2% and 93.7% for GM12878 and K562 respectively (Table 6). The 66-factor model identifies genes with slightly higher accuracy than the single-factor models: the accuracies are 3% and 1.9% higher than the best single-factor predictions. The high prediction accuracies across the two cell lines suggest strong correlations between gene expression level and TF binding or HMs in the two cell conditions. However, the limited improvement also illustrates a certain degree of redundancy among factors, meaning they share a similar amount of information for "predicting" gene expression level.
Table 6 The prediction accuracies of gene expression level in two cell lines
Based on the prediction results of the single-factor models, for any TF or HM, we defined a prediction difference index between the two cell lines.
$$ D_{Acc} = \frac{Acc^{G} - Acc^{K}}{Acc^{G} + Acc^{K}} $$
where AccG and AccK are the prediction accuracies of a given TF or HM in GM12878 and K562 respectively. The rank list of DAcc is shown in Fig. 6a.
The prediction difference index for TFs/HMs. a The rank list of the prediction difference index DAcc for 66 factors. The X-axis represents the name of TF/HM, and the Y-axis represents the prediction difference index. b The correlation properties between the prediction difference index DAcc and the total difference index Dsignal. The X-axis is the prediction difference index, and the Y-axis is the total difference index
We then extracted the factors with the top ten DAcc (DAcc > 0) or the bottom ten (DAcc < 0) as input to construct SVM models for expression level prediction. The prediction accuracies are 89.5% and 87.8% for the top ten and 80.0% and 89.1% for the bottom ten in the two cell lines respectively. As shown in Table 5, the prediction performances of the top ten TF and HM signals almost achieve the highest accuracies, being ~2.7% and ~4.6% lower than the performance of the full-factor model, and are even lower than the predictions of some single factors.
In Fig. 6, we found that the prediction difference index DAcc is consistent with the total difference index Dsignal, a parameter representing the dynamic variation of a TF binding or HM between the two cell lines. To further demonstrate the relationship between Dsignal and DAcc, we calculated their Pearson correlation coefficient, which is 0.84 (Fig. 6b). The results directly indicate that the dynamic variation of a TF binding or HM distribution around the TSS between the two cell lines is positively related to the difference in its power to predict gene expression level. Meanwhile, the results also illustrate that the predictive power of a TF/HM presents obvious differences if its binding varies dynamically around the TSS between the two cell lines. We suppose that factors with great dynamic variation should be strongly associated with cell line specific regulation; for example, JUND may be related to a specific vital process in K562. On the other hand, the factors with higher predictive capability, such as H3K9ac and H3k27ac, barely vary among cell lines. In general, they should take part in basic regulation processes.
Discussion
Interaction of TFs and HMs
Co-occupancy of TF binding is a key mechanism for the fine regulation of gene expression. However, there is no reliable approach for computationally measuring the degree of TF-TF cooperation and quantitatively modeling its dynamic variation between cell lines. We here introduced a set of statistical indexes to investigate the degree of TF-TF or TF-HM genome-wide overlap in the TSS region. The overlap ratio of TFs provides a quantitative parameter for measuring the degree of TF interaction: the higher the value, the greater the chance that they interact to regulate gene expression. On the contrary, TFs with low overlap ratios should be mutually exclusive. We obtained some TF combinations confirmed by previous experiments, and also found new combinations for further experiments. In addition, the dynamics among cell lines provide an approach to study the dynamics of TF or HM cooperation in the regulation of gene expression. We suppose that the interactions of TF combinations with little variation are conserved in the two cell lines. In fact, the prediction of TF binding sites from histone marks, or vice versa, substantially depends on their high co-occupancy. This also gives us a clue for analyzing the information redundancy of TFs or HMs in predicting gene expression level: we can extract a set of TFs or HMs based on the overlap analysis for predictive models.
Meanwhile, the analysis of the dynamics or conservation of TF pair combinations is able to capture the vast complexity of colocalization patterns, resulting in the identification of many previously known interactions. For example, we identified ATF3:JUND as a K562-specific combination. In fact, the ATF3/JUND heterodimer preferentially binds to an AP-1-like site and is most likely an important mediator of the response upon overexpression of JUND [38]. On the other hand, we found the conserved combination CTCF:Rad21, which acts as a host cell restriction factor for Kaposi's sarcoma-associated herpesvirus (KSHV) lytic replication by modulating viral gene transcription [39]. In addition to confirming known combinations, we also found additional colocalization patterns that have not been previously documented. These may be entirely novel combinations awaiting further confirmation. Our results provide many insights into the TF colocalizations that define the regulatory code of humans.
The relative importance of TFs and HMs for classification
The accurate regulation of gene expression is a complex process in which many TFs and HMs participate. Previous studies have shown that TF binding and histone modification are predictive of mRNA transcript levels in some cell lines. However, these studies were restricted to the limited TF and HM data available at the time. In 2010, Karlic et al. [14] systematically analyzed 38 HMs, using only the numbers of tags for each histone modification or variant in the 4 kb surrounding the TSS; they did not consider the distance between the HM and the TSS. In our paper, for both HMs and TFs, the association strength (TFAS), which integrates all peak intensities of a TF/HM weighted by their proximity to a gene, is used to predict gene expression level. Next, we built the SVM model with a single TF or HM to predict the binary classification of high or low gene expression and evaluated the performance using accuracy, whereas Ouyang et al. [21] and Cheng et al. [13] evaluate predictive power by calculating the Pearson correlation coefficient (PCC) between observed and predicted gene expression values. Our method is more straightforward for capturing the main signals with comprehensive data.
In particular, the relative importance of these factors in the regulation of gene expression is still under debate. Furthermore, there is a long way to go to precisely quantify the expression level of each gene. In this study, we avoid this challenge by the alternative of classifying genes into high and low expression. We constructed an SVM model with a single TF or HM and focused on investigating the relative contribution of TF binding or HMs in the prediction of gene expression level. By ranking TFs and HMs by predictive power, we can understand their potential capability in gene regulation. The results show that the prediction accuracies vary significantly across HMs and TFs. Furthermore, our results suggest that two types of HMs (H3k9ac and H3k27ac) with activating function and three TFs (ELF1, TAF1, and POL2) are predictive of gene expression with accuracies of about 85~92%, and that activating TFs have higher predictive power than repressive TFs. The highest predictive accuracy for gene classification was achieved by the 66-factor model.
We compared the predictive difference of a certain TF or HM between the two cell lines. The results indicate that some factors change dramatically. We have shown above that the single-factor model for gene expression prediction is cell line specific: the best prediction accuracies are achieved by H3K9ac in GM12878 but by POL2 in K562. In addition, TFs and HMs show different relative importance in different cell lines. A TF might be active and exhibit significant influence on gene expression in K562, but inactive with little effect on gene expression in GM12878. For example, JUND shows a relatively stronger effect on gene expression in K562 than in GM12878, while MXI1 shows the opposite trend. Based on the correlation analysis of Dsignal and DAcc, we found that the variation in predictive power is closely related to the dynamic variation of a factor's distribution around the TSS in the two cell lines. TFs with a simple, unidirectional function always present higher predictive capability, for instance activating factors such as H3K9ac, ELF1, TAF1, POL2, H3K27ac, and EGR1, or repressive factors such as MXI1 and CDP. But the predictive power of TFs with complex or bidirectional functions, such as ATF3, CTCF, and SRF, is weak.
Simplified model with six factors
For a few TFs and HMs with higher predictive power, we found that their total difference indexes Dsignal are the lowest, while their overlap ratios and average overlap ratios are high. For example, the prediction accuracies of POL2, TAF1, and TBP are 86.6%, 87.6%, and 79.9% in GM12878 and 91.8%, 90.2%, and 88.9% in K562. Meanwhile, the TFs with the highest overlap ratios but lower average overlap ratios have moderate predictive power, such as Rad21, SMC3, and CTCF: their prediction accuracies are 60.0%, 57.9%, and 61.5% in GM12878 and 58.6%, 55.3%, and 60.4% in K562.
Then, a six-factor model including POL2, TAF1, PML, ELF1, H3K27ac, and H3K9ac was constructed. The six factors chosen have transcriptional activation functions and high predictive power. The prediction accuracies are 92.0% and 93.3%, pretty close to the prediction accuracy of all 66 factors; adding other TF/HM features cannot improve the prediction of gene expression level. The results suggest that a few major factors are the most useful for predicting gene expression level. This observation is consistent with the results in [21] that the binding of only a handful of TFs can explain a large percentage of expression variance. From our study, we can extract key TFs or HMs, based on the analysis of the overlap and average overlap ratios, to predict gene expression level.
Future extension with cis-regulatory element annotation
We acknowledge the limitation that we mainly focus on cooperation at the trans level. It is well known that cis-regulatory elements (specifically enhancers) work together with trans-elements (TFs and HMs) to precisely determine downstream gene expression. Here we focus on the complexity at the trans level, i.e., the combinatorial effect of TFs and HMs, by checking their co-localization in regulatory elements and the downstream gene expression effect. We implicitly consider enhancers by looking at the distal binding peaks of TFs and HMs and summarizing the binding strength. However, we did not look at specific "enhancer" regions together with their co-localized TFs/HMs, which would provide more detailed and enriched information. Furthermore, we simplified the many-to-many mappings between regulatory regions and target genes. We will extend the current work to TF, HM, and regulatory element cooperations. In the future we will also integrate new data types, for example ATAC-seq and Hi-C/HiChIP, which hold the promise of providing binding profiles for many TFs at once and high-resolution regulatory element-gene associations.
In summary, we analyzed the distribution and overlapping state of TFs and HMs and obtained three types of TF and HM (GM12878_rich_factor, K562_rich_factor and unbiased_factor) based on their enrichment around the TSS in the two cell lines. We calculated the overlap ratio of 1485 TF pairs to test genome-wide co-localization in the two cell lines. The correlation analysis indicated that their co-localizations are overall conserved, but 17 TF pairs are highly dynamic between GM12878 and K562. Using the TF or HM association strength with each gene, we investigated the regulatory potency of TFs/HMs in predicting gene expression level and their dynamic variation between cell lines. These studies provide a detailed correlation analysis of the 66 regulatory factors and new insight into the cooperation of TFs and HMs on gene expression. The results are helpful for understanding interaction patterns of TFs/HMs as well as their cell line specificity in the gene expression and regulation process.
In short, we integrate ChIP-seq and RNA-seq data to explore TF/HM interactions related to gene expression and their dynamics across cell lines. These analyses are helpful for further study of the interactions of various factors in the gene expression and regulation process. Methodologically, we propose a set of novel indexes to study the interaction among TFs/HMs and provide new insight into the dynamic regulation of gene expression by TFs and HMs. We constructed an SVM model for the identification of gene expression level using each TF or HM as a single predictor. By ranking TFs and HMs by predictive power, we can further investigate the regulatory potency of each TF and HM.
Matched RNA-seq and ChIP-seq data
The genomic coordinates of the Hg19 human Refseq genes were downloaded from UCSC (http://genome.ucsc.edu/cgi-bin/hgTables). We excluded gene transcripts overlapping within the 20 kb regions upstream and downstream of the TSS, leaving a set of 9555 genes for analysis. For GM12878 and K562, the ENCODE Consortium (https://www.encodeproject.org/) provides comprehensive ChIP-seq data for TFs and HMs and matched RNA-seq data. The ChIP-seq data of the 55 TFs (narrow peak format) and 11 HMs (broad peak format) available in both cell lines were extracted for the following analysis and calculation. The peak data give the genome-wide, context-specific locations of binding or modification for a given TF or HM in a given cell type. This allows us not only to analyze TF/HM co-localization in one cell line but also to compare co-localization dynamics across cell lines.
The matched RNA-seq data of GM12878 and K562 were also obtained from ENCODE. Based on the FPKM definition (fragments per kilobase of exon per million fragments mapped), the gene expression levels of the 9555 genes were calculated with the Cufflinks algorithm [20, 40, 41] from the RNA-seq expression profiles in the two cell lines. All genes were then divided into 4 clusters by quartile according to FPKM. The top 25% of genes (2389 genes, FPKM ≥ 3.58) and the bottom 25% (2389 genes, FPKM ≤ 2.9 × 10−5) were classified as highly and lowly expressed genes, respectively, in GM12878. The top 25% (2389 genes, FPKM ≥ 3.68) and the bottom 25% (2389 genes, FPKM ≤ 0.9 × 10−5) were classified as highly and lowly expressed genes, respectively, in K562 (Additional file 1: Figure S2).
Total difference index
To understand the dynamics of TFs and HMs across cell lines, we focus on their distribution characteristics and differences near the TSS. First, the 40 kb DNA region flanking the TSS of each transcript was separated into 200 bins of 200 bp each, centered at the TSS (20 kb upstream and 20 kb downstream). We assumed that the mid-point of a signal peak is the interaction site between the TF (or HM) and DNA. For a given TF or HM, we counted the number of peaks \( N_{ij}^{\alpha} \) in the jth bin of the ith gene for the αth cell line. Then, the signal intensity \( S_j^{\alpha} \) in each of the 200 bins in the αth cell line was calculated over the n genes by the following formula.
$$ S_j^{\alpha}=\frac{10^3}{n}\sum \limits_{i=1}^n N_{ij}^{\alpha},\qquad \alpha \in \{G,K\} $$
Here n equals 9555. GM12878 is denoted by G, and K562 by K.
Next, we defined a total difference index Dsignal as follows to investigate the dynamics of TF or HM localization between the two cell lines.
$$ D_{signal}=\frac{\sum \limits_j S_j^{G}-\sum \limits_j S_j^{K}}{\sum \limits_j S_j^{G}+\sum \limits_j S_j^{K}} $$
And the ratio of the signal intensity between GM12878 and K562 can be denoted by
$$ f=\frac{\sum \limits_j S_j^{G}}{\sum \limits_j S_j^{K}} $$
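The two indexes are directly related; a one-line calculation (immediate from the definitions above, not an additional assumption) gives

$$ D_{signal}=\frac{f-1}{f+1},\qquad f=\frac{1+D_{signal}}{1-D_{signal}} $$

For instance, Dsignal = 0 corresponds to f = 1 (equal total intensity in the two cell lines), while Dsignal = 1/3 corresponds to f = 2 (twice the total intensity in GM12878).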
The overlap ratio and the average overlap ratio
To further investigate the potential interactions among TFs, the genome-wide overlap degree of each TF pair was analyzed. As shown in Fig. 7, the overlap state is determined by the following condition,
$$ \left|S_1-S_2\right|<\frac{L_1+L_2}{2}. $$
Fig. 7: Schematic diagram of the overlap state between TF1 and TF2. Two peaks, one from TF1 and one from TF2, are shown; L1 and L2 are the peak widths, and S1 and S2 are the peak centres of TF1 and TF2, respectively.
Here L1 and L2 are the peak widths, and S1 and S2 are the peak centers of TF1 and TF2, respectively. The overlap state is then encoded as a binary value (1 if condition (4) holds; 0 otherwise). We defined the overlap ratio as follows,
$$ R_o=\frac{2n}{N_1+N_2}. $$
Here n is the number of overlapping peaks between the two TFs, and N1 and N2 are the total peak numbers of TF1 and TF2, respectively. The value indicates the genome-wide co-localization degree of the two TFs. We assume that cooperativity and co-localization degree are closely related.
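As a worked example (the peak counts are hypothetical, purely for illustration): if TF1 has N1 = 1000 peaks, TF2 has N2 = 3000 peaks, and n = 800 of them overlap, then

$$ R_o=\frac{2\times 800}{1000+3000}=0.4 $$

When the two TFs have very different peak numbers, Ro is bounded well below 1 (here at most 0.5), which motivates the complementary average overlap ratio defined next.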
Given a transcription factor, say TF1, with m binding peaks (P1, P2, …, Pm), we investigated the overlap state of each peak with the peaks of the other TFs and obtained a vector \( \vec{X}=\{x_1,x_2,\cdots,x_m\} \) over the m peaks. Here xi (i = 1, 2, ⋯, m) is the number of transcription factors that have at least one peak overlapping the ith peak of TF1 (Additional file 1: Table S3). We defined the average overlap ratio Rav as follows,
$$ R_{av}=\frac{1}{m}\sum \limits_{i=1}^m\frac{x_i}{N} $$
Here N is the total number of other TFs, which is 54 in this study. The parameter Rav indicates the extent of potential interaction of this TF with the other TFs.
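For example (again with hypothetical numbers): if TF1 has m = 3 peaks that are overlapped by x = {10, 20, 24} other TFs, respectively, then

$$ R_{av}=\frac{1}{3}\left(\frac{10}{54}+\frac{20}{54}+\frac{24}{54}\right)=\frac{54}{162}=\frac{1}{3} $$

meaning that an average peak of TF1 co-localizes with one third of all other assayed TFs.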
TF or HM association strength to target gene
Ouyang et al. [21] defined the TF association strength (TFAS), which integrates all the peak intensities of a TF weighted by their proximity to a gene. Let gk be the intensity of the kth binding peak of TFj (or HMj) and dk the distance between the TSS of gene i and that peak. The TFAS of TFj (or HMj) on gene i is then
$$ A_{ij}=\sum \limits_k g_k\, e^{-d_k/d_0} $$
Here we sum over all binding peaks k of a given TF or HM within a sufficiently large window (20 kb upstream and 20 kb downstream of the TSS) of gene i. We set d0 to 2 kb, based on the distance distribution of TF signal peaks.
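To illustrate the exponential down-weighting (the peak values are hypothetical): a peak of intensity g1 = 50 located d1 = 2 kb from the TSS contributes 50e−1 ≈ 18.4, whereas a peak of intensity g2 = 30 at d2 = 10 kb contributes only 30e−5 ≈ 0.2, giving

$$ A_{ij}\approx 18.4+0.2=18.6 $$

Distal peaks are thus strongly down-weighted but not discarded, which is how the measure implicitly captures distal binding.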
The strength correlation of TF pairs around TSS
TFAS is designed to measure the strength with which a TF regulates its target gene. Here, we used TFAS to analyze the potential interaction between transcription factors in the TSS region. For the n genes, we calculated the TFAS values of the 55 TFs based on their signal peaks in the 40 kb region centered on the TSS. Then, the potential interaction of a pair of TFs was estimated by the Pearson correlation coefficient (PCC) of their two sets of TFAS values. For example, the PCC between TFx and TFy was calculated as follows
$$ p_{x,y}=\frac{\sum \limits_{j=1}^n\left(x_j-\overline{x}\right)\left(y_j-\overline{y}\right)}{\sqrt{\sum \limits_{j=1}^n{\left(x_j-\overline{x}\right)}^2\sum \limits_{j=1}^n{\left(y_j-\overline{y}\right)}^2}}. $$
Here X : {x1, x2, …, xn} and Y : {y1, y2, …, yn} are the vectors of TFAS values for TFx and TFy, and \( \overline{x} \) and \( \overline{y} \) are the means of X and Y. The PCC values (−1 ≤ px,y ≤ 1) provide a criterion for exploring a TF pair's potential interaction: the higher the PCC, the stronger the tendency to interact.
SVM classifier
We used libSVM [37] to predict the gene expression level, using the TFAS values of individual TFs (or HMs) and their combinations as features. We predict the binary expression level of each gene (high/low) and compare the predictive contribution of each TF and HM to gene expression in GM12878 and K562. A comprehensive list of 66 factors, including 55 TFs and 11 HMs, was used.
Prediction evaluation
Following 5-fold cross-validation, the 9555 genes were randomly partitioned into 5 sets of equal size. A single set is retained as validation data for testing the model, and the remaining 4 sets are used as training data. The process is repeated 5 times, with each of the 5 sets used exactly once as validation data, and the 5 results are averaged to produce a single estimate. Prediction performance is estimated by sensitivity, specificity, and accuracy as follows.
$$ S_n=\frac{TP}{TP+FN},\qquad S_p=\frac{TN}{TN+FP},\qquad Acc=\frac{S_n+S_p}{2} $$
Here, TP and TN are the numbers of true positives and true negatives, i.e., genes whose high (low) expression level is predicted correctly. FN and FP are the numbers of false negatives and false positives, i.e., genes whose high (low) expression level is predicted incorrectly.
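As a numerical illustration (hypothetical counts, not from the study): with TP = 900, FN = 100, TN = 850, and FP = 150,

$$ S_n=\frac{900}{1000}=0.90,\qquad S_p=\frac{850}{1000}=0.85,\qquad Acc=\frac{0.90+0.85}{2}=0.875 $$

Averaging Sn and Sp, rather than pooling all predictions, keeps the accuracy balanced between the high- and low-expression classes.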
FPKM:
Fragments per kilobase of exon per million fragments mapped
HM:
Histone modification
PCC:
Pearson correlation coefficient
TF:
Transcription factor
TFAS:
TF or HM association strength
TSS:
Transcription start site
Hu ZH, Gallo SM. Identification of interacting transcription factors regulating tissue gene expression in human. BMC Genomics. 2010;11:49.
Veerla S, Ringner M, Hoglund M. Genome-wide transcription factor binding site/promoter databases for the analysis of gene sets and co-occurrence of transcription factor binding motifs. BMC Genomics. 2010;11:145.
Costa IG, Roider HG, do Rego TG, de Carvalho Fde A. Predicting gene expression in T cell differentiation from histone modifications and transcription factor binding affinities by linear mixture models. BMC Bioinformatics. 2011;12(Suppl 1):S29.
Gong W, Koyano-Nakagawa N, Li T, Garry DJ. Inferring dynamic gene regulatory networks in cardiac differentiation through the integration of multi-dimensional data. BMC Bioinformatics. 2015;16:74.
Farnham PJ. Insights from genomic profiling of transcription factors. Nat Rev Genet. 2009;10(9):605–16.
Li B, Carey M, Workman JL. The role of chromatin during transcription. Cell. 2007;128(4):707–19.
Wang J, Malecka A, Troen G, Delabie J. Comprehensive genome-wide transcription factor analysis reveals that a combination of high affinity and low affinity DNA binding is needed for human gene regulation. BMC Genomics. 2015;16(Suppl 7):S12.
Berger SL. The complex language of chromatin regulation during transcription. Nature. 2007;447(7143):407–12.
Schmidt F, Gasparoni N, Gasparoni G, Gianmoena K, Cadenas C, Polansky JK, Ebert P, Nordstrom K, Barann M, Sinha A, et al. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction. Nucleic Acids Res. 2017;45(1):54–66.
Wang D, Rendon A, Ouwehand W, Wernisch L. Transcription factor co-localization patterns affect human cell type-specific gene expression. BMC Genomics. 2012;13:263.
He F, Buer J, Zeng AP, Balling R. Dynamic cumulative activity of transcription factors as a mechanism of quantitative gene regulation. Genome Biol. 2007;8(9):R181.
Banerjee N, Zhang MQ. Identifying cooperativity among transcription factors controlling the cell cycle in yeast. Nucleic Acids Res. 2003;31(23):7024–31.
Cheng C, Gerstein M. Modeling the relative relationship of transcription factor binding and histone modifications to gene expression levels in mouse embryonic stem cells. Nucleic Acids Res. 2012;40(2):553–68.
Karlic R, Chung HR, Lasserre J, Vlahovicek K, Vingron M. Histone modification levels are predictive for gene expression. Proc Natl Acad Sci U S A. 2010;107(7):2926–31.
Yu H, Zhu S, Zhou B, Xue H, Han JD. Inferring causal relationships among different histone modifications and gene expression. Genome Res. 2008;18(8):1314–24.
Xie D, Boyle AP, Wu L, Zhai J, Kawli T, Snyder M. Dynamic trans-acting factor colocalization in human cells. Cell. 2013;155(3):713–24.
Djekidel MN, Liang Z, Wang Q, Hu Z, Li G, Chen Y, Zhang MQ. 3CPET: finding co-factor complexes from ChIA-PET data using a hierarchical Dirichlet process. Genome Biol. 2015;16:288.
Boyer LA, Lee TI, Cole MF, Johnstone SE, Levine SS, Zucker JP, Guenther MG, Kumar RM, Murray HL, Jenner RG, et al. Core transcriptional regulatory circuitry in human embryonic stem cells. Cell. 2005;122(6):947–56.
Johnson DS, Mortazavi A, Myers RM, Wold B. Genome-wide mapping of in vivo protein-DNA interactions. Science. 2007;316(5830):1497–502.
Mortazavi A, Williams BA, McCue K, Schaeffer L, Wold B. Mapping and quantifying mammalian transcriptomes by RNA-Seq. Nat Methods. 2008;5(7):621–8.
Ouyang ZQ, Zhou Q, Wong WH. ChIP-Seq of transcription factors predicts absolute and differential gene expression in embryonic stem cells. Proc Natl Acad Sci U S A. 2009;106(51):21521–6.
Su WX, Li QZ, Zhang LQ, Fan GL, Wu CY, Yan ZH, Zuo YC. Gene expression classification using epigenetic features and DNA sequence composition in the human embryonic stem cell line H1. Gene. 2016;592(1):227–34.
Hou CH, Dale R, Dean A. Cell type specificity of chromatin organization mediated by CTCF and cohesin. Proc Natl Acad Sci U S A. 2010;107(8):3651–6.
Parelho V, Hadjur S, Spivakov M, Leleu M, Sauer S, Gregson HC, Jarmuz A, Canzonetta C, Webster Z, Nesterova T, et al. Cohesins functionally associate with CTCF on mammalian chromosome arms. Cell. 2008;132(3):422–33.
Schmidt D, Schwalie PC, Ross-Innes CS, Hurtado A, Brown GD, Carroll JS, Flicek P, Odom DT. A CTCF-independent role for cohesin in tissue-specific transcription. Genome Res. 2010;20(5):578–88.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, Sanborn AL, Machol I, Omer AD, Lander ES, et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159(7):1665–80.
Xie X, Mikkelsen TS, Gnirke A, Lindblad-Toh K, Kellis M, Lander ES. Systematic discovery of regulatory motifs in conserved regions of the human genome, including thousands of CTCF insulator sites. Proc Natl Acad Sci U S A. 2007;104(17):7145–50.
Cuddapah S, Jothi R, Schones DE, Roh TY, Cui K, Zhao K. Global analysis of the insulator binding protein CTCF in chromatin barrier regions reveals demarcation of active and repressive domains. Genome Res. 2009;19(1):24–32.
Dixon JR, Selvaraj S, Yue F, Kim A, Li Y, Shen Y, Hu M, Liu JS, Ren B. Topological domains in mammalian genomes identified by analysis of chromatin interactions. Nature. 2012;485(7398):376–80.
Liu L, Jin G, Zhou X. Modeling the relationship of epigenetic modifications to transcription factor binding. Nucleic Acids Res. 2015;43(8):3873–85.
Wang Y, Li XM, Hu HY. H3K4me2 reliably defines transcription factor binding regions in different cells. Genomics. 2014;103(2–3):222–8.
Benveniste D, Sonntag HJ, Sanguinetti G, Sproul D. Transcription factor binding predicts histone modifications in human cell lines. Proc Natl Acad Sci U S A. 2014;111(37):13367–72.
Hishiki T, Ohshima T, Ego T, Shimotohno K. BCL3 acts as a negative regulator of transcription from the human T-cell leukemia virus type 1 long terminal repeat through interactions with TORC3. J Biol Chem. 2007;282(39):28335–43.
Chae HD, Mitton B, Lacayo NJ, Sakamoto KM. Replication factor C3 is a CREB target gene that regulates cell cycle progression through the modulation of chromatin loading of PCNA. Leukemia. 2015;29(6):1379–89.
Bieberstein NI, Oesterreich FC, Straube K, Neugebauer KM. First exon length controls active chromatin signatures and transcription. Cell Rep. 2012;2(1):62–8.
Yun MY, Wu J, Workman JL, Li B. Readers of histone modifications. Cell Res. 2011;21(4):564–78.
Chang CC, Lin CJ. LIBSVM : A library for support vector machines. ACM Trans Intell Syst Technol. 2011;2(3):1–27.
Nilsson M, Ford J, Bohm S, Toftgard R. Characterization of a nuclear factor that binds juxtaposed with ATF3/Jun on a composite response element specifically mediating induced transcription in response to an epidermal growth factor/Ras/Raf signaling pathway. Cell Growth Differ. 1997;8(8):913–20.
Li DJ, Verma D, Mosbruger T, Swaminathan S. CTCF and Rad21 act as host cell restriction factors for Kaposi's sarcoma-associated herpesvirus (KSHV) lytic replication by modulating viral gene transcription. PLoS Pathog. 2014;10(1):e1003880.
Trapnell C, Roberts A, Goff L, Pertea G, Kim D, Kelley DR, Pimentel H, Salzberg SL, Rinn JL, Pachter L. Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and cufflinks. Nat Protoc. 2012;7(3):562–78.
Trapnell C, Williams BA, Pertea G, Mortazavi A, Kwan G, van Baren MJ, Salzberg SL, Wold BJ, Pachter L. Transcript assembly and quantification by RNA-Seq reveals unannotated transcripts and isoform switching during cell differentiation. Nat Biotechnol. 2010;28(5):511–5.
This work was supported by National Natural Science Foundation of China [NO. 61462068, NO. 60963015 to LZ, NO. 31460234 to QL, NO. 91730301, NO. 61671444, NO. 61621003 to YW] and Inner Mongolia Autonomous Region Natural Science Foundation [NO. 2014MS0103 to LZ]. YW is also supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB13050100).
Publication costs are funded by the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB13000000).
The genomic coordinates of the Hg19 human Refseq genes are available from UCSC (http://genome.ucsc.edu/cgi-bin/hgTables), and the ChIP-seq data of TFs/HMs and the RNA-seq data in GM12878 and K562 are available from the ENCODE Consortium (http://genome.ucsc.edu/ENCODE/dataMatrix/encodeDataMatrixHuman.html).
About this supplement
This article has been published as part of BMC Genomics Volume 19 Supplement 10, 2018: Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-19-supplement-10.
School of Physical Science and Technology, Inner Mongolia University, Hohhot, Inner Mongolia, 010021, China
Lirong Zhang, Gaogao Xue, Junjie Liu & Qianzhong Li
CEMS, NCMIS, MDIS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China
School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
Center for Excellence in Animal Evolution and Genetics, Chinese Academy of Sciences, Kunming, 650223, China
Lirong Zhang
Gaogao Xue
Junjie Liu
Qianzhong Li
LZ, GX and QL initiated and designed the study, conceived the analysis procedure, carried out data analysis, and wrote the drafted manuscript. JL and YW participated in results discussion. All authors participated in writing manuscript. All authors read and approved the final manuscript.
Correspondence to Lirong Zhang or Qianzhong Li or Yong Wang.
Additional file
Additional file 1:
Table S1. The brief introduction of the two cell lines. Table S2. Transcription factors associated with cancer among the 55 TFs. Table S3. The definition of the average overlap ratio for TF1 with m peaks. Figure S1. The overlap ratios of 11 HMs with 55 TFs in GM12878. Figure S2. The distribution of gene FPKM values in GM12878 and K562 (DOCX 52 kb)
Zhang, L., Xue, G., Liu, J. et al. Revealing transcription factor and histone modification co-localization and dynamics across cell lines by integrating ChIP-seq and RNA-seq data. BMC Genomics 19, 914 (2018). https://doi.org/10.1186/s12864-018-5278-5
Writing Efficient and Dynamic Queries
From Learn SQL for Data Analytics course
Dynamic Queries
We have learned how to write queries to analyze data and perform simple calculations. In this section, we will take this a step further. We will use SQL functions to create custom reports based on our data and will learn how to analyze trends in our data.
Use CASE statements to structure data and create new attributes.
Combine multiple subqueries into one using "AS."
Temporary Tables and Subqueries.
Determining data trends through advanced reporting.
SQL Fiddle Recap
If you already remember how to use SQL Fiddle, skip this section.
SQL Fiddle is an online tool that lets you run SQL queries. It is a great tool for practicing what we will learn in this article. As in any database, we will have tables and then write queries to extract data from those tables. The left panel is the Schema Panel, where we will build tables and add data to them. The right panel is where we will write SQL code to extract data from those tables and run calculations on those values. The bottom panel is where you will see the results of your query. Since we can't run a query without tables, the right and lower panels will be grayed out until you build a schema, i.e., create tables. A schema can also contain objects other than tables, which we will cover later.
Loan Sales Table Summary
Assume you're working for a bank's division that sells loans. The database has information about all the loans that have been approved in the past. The fields in the database can be read off the CREATE TABLE statement below: each loan has an ID, funded amount, term, interest rate, grade, loan status, state, employment length, home ownership status, loan purpose, and loan utilization.
Create Loan Sales Table
Copy and paste the following schema into the Schema panel and click on "Build Schema". This will create the Loan Sales table to run all queries in this exercise.
CREATE TABLE Loan_Sales
(`ID` int, `Funded_Amount` int, `Term` int, `Interest_Rate` decimal(6,4), `Grade` varchar(125), `Loan_Status` varchar(18), `State` varchar(2), `Employment_Length` varchar(9), `Home_Ownership` varchar(8), `Loan_Purpose` varchar(18), `Loan_Utilization` decimal(6,4));
INSERT INTO Loan_Sales
(`ID`, `Funded_Amount`, `Term`, `Interest_Rate`, `Grade`, `Loan_Status`, `State`, `Employment_Length`, `Home_Ownership`, `Loan_Purpose`, `Loan_Utilization`)
VALUES
(1, 16000, 3, 0.0797, 'A', 'Current', 'TX', '10+ years', 'RENT', 'debt_consolidation', 0.473),
(2, 4800, 3, 0.0532, 'A', 'Current', 'KY', '2 years', 'MORTGAGE', 'medical', 0.186),
(3, 14875, 3, 0.0735, 'A', 'Current', 'MN', '2 years', 'OWN', 'credit_card', 0.32),
(4, 9600, 3, 0.1199, 'B', 'Current', 'NC', '10+ years', 'MORTGAGE', 'debt_consolidation', 0.923),
(5, 7800, 3, 0.1091, 'B', 'Current', 'MO', '< 1 year', 'RENT', 'debt_consolidation', 0.669),
(6, 4000, 3, 0.2291, 'E', 'Current', 'NC', '5 years', 'RENT', 'debt_consolidation', 0.373),
(7, 10000, 3, 0.0672, 'A', 'Current', 'PA', '2 years', 'MORTGAGE', 'small_business', 0.444),
(8, 16000, 5, 0.1262, 'C', 'Current', 'CA', '7 years', 'MORTGAGE', 'debt_consolidation', 0.294),
(9, 10000, 3, 0.1091, 'B', 'Current', 'MN', '10+ years', 'MORTGAGE', 'other', 0.726),
(10, 3500, 3, 0.0797, 'A', 'Current', 'LA', '5 years', 'OWN', 'other', 0.278),
(11, 20000, 5, 0.1262, 'C', 'Current', 'OH', '1 year', 'MORTGAGE', 'debt_consolidation', 0.614),
(12, 3000, 3, 0.0672, 'A', 'Current', 'PA', 'n/a', 'RENT', 'debt_consolidation', 0.2),
(13, 6000, 3, 0.0944, 'B', 'Current', 'CA', '10+ years', 'RENT', 'medical', 0.486),
(14, 25000, 5, 0.1262, 'C', 'Current', 'NY', '1 year', 'RENT', 'credit_card', 0.494),
(15, 4000, 3, 0.1042, 'B', 'Current', 'NY', '10+ years', 'RENT', 'credit_card', 0.337),
(16, 12000, 3, 0.0672, 'A', 'Current', 'CT', '10+ years', 'OWN', 'vacation', 0.043),
(17, 3000, 3, 0.0672, 'A', 'Current', 'KY', '2 years', 'RENT', 'other', 0.157),
(18, 12000, 5, 0.2872, 'F', 'Current', 'NY', '6 years', 'RENT', 'debt_consolidation', 0.565),
(19, 11200, 3, 0.0608, 'A', 'Current', 'NJ', '6 years', 'OWN', 'other', 0.19),
(20, 8400, 3, 0.0735, 'A', 'Current', 'NC', '1 year', 'MORTGAGE', 'credit_card', 0.548),
(21, 12000, 5, 0.1042, 'B', 'Current', 'FL', 'n/a', 'MORTGAGE', 'debt_consolidation', 0.241),
(22, 8000, 3, 0.0944, 'B', 'Current', 'NY', '10+ years', 'RENT', 'major_purchase', 0.06),
(23, 30000, 3, 0.0532, 'A', 'Current', 'TN', '10+ years', 'MORTGAGE', 'credit_card', 0.386),
(24, 7500, 3, 0.0672, 'A', 'Current', 'MT', '10+ years', 'OWN', 'medical', 0.371),
(25, 10000, 3, 0.1199, 'B', 'Current', 'TX', '2 years', 'MORTGAGE', 'debt_consolidation', 0.079),
(26, 20000, 5, 0.3079, 'G', 'Current', 'VA', '10+ years', 'MORTGAGE', 'debt_consolidation', 0.79),
(27, 10000, 3, 0.0797, 'A', 'In Grace Period', 'FL', '5 years', 'MORTGAGE', 'other', 0.336),
(28, 20000, 5, 0.0735, 'C', 'Late (31-120 days)', 'VA', '10+ years', 'MORTGAGE', 'home_improvement', 0.218),
(29, 10000, 3, 0.0944, 'A', 'Current', 'KS', '< 1 year', 'MORTGAGE', 'vacation', 0.177),
(30, 19000, 3, 0.1199, 'A', 'Current', 'CA', '6 years', 'MORTGAGE', 'debt_consolidation', 0.374),
(31, 8000, 3, 0.0993, 'B', 'Current', 'FL', 'n/a', 'OWN', 'other', 0.405),
(32, 3200, 3, 0.2, 'B', 'Current', 'MI', '6 years', 'MORTGAGE', 'other', 0.656),
(33, 10000, 3, 0.0608, 'B', 'Current', 'NC', '10+ years', 'MORTGAGE', 'debt_consolidation', 0.461),
(34, 10000, 3, 0.0797, 'A', 'Fully Paid', 'NJ', '10+ years', 'MORTGAGE', 'debt_consolidation', 0.074),
(35, 10000, 5, 0.1806, 'D', 'Current', 'MO', 'n/a', 'MORTGAGE', 'debt_consolidation', 0.671),
(36, 20000, 3, 0.0672, 'A', 'Current', 'SC', '5 years', 'MORTGAGE', 'debt_consolidation', 0.204),
(37, 24000, 3, 0.0993, 'A', 'Current', 'IL', '4 years', 'RENT', 'debt_consolidation', 0.067),
(38, 20000, 5, 0.1505, 'D', 'Current', 'FL', '5 years', 'MORTGAGE', 'debt_consolidation', 0.552),
(39, 20000, 3, 0.0532, 'A', 'Current', 'NC', '10+ years', 'MORTGAGE', 'credit_card', 0.44),
(40, 20000, 5, 0.1806, 'B', 'Current', 'NY', '6 years', 'OWN', 'debt_consolidation', 0.317),
(41, 13000, 5, 0.3079, 'C', 'Current', 'FL', '10+ years', 'RENT', 'debt_consolidation', 0.774),
(42, 40000, 3, 0.1262, 'A', 'Current', 'CA', '< 1 year', 'MORTGAGE', 'debt_consolidation', 0.372),
(43, 8400, 3, 0.1199, 'D', 'Current', 'GA', '4 years', 'MORTGAGE', 'debt_consolidation', 0.728),
(44, 30000, 5, 0.1408, 'G', 'Current', 'NY', '10+ years', 'RENT', 'other', 0.303),
(45, 10000, 5, 0.0608, 'C', 'Current', 'ME', '10+ years', 'RENT', 'home_improvement', 0.099),
(46, 35000, 5, 0.0797, 'B', 'Current', 'CA', '< 1 year', 'RENT', 'debt_consolidation', 0.637),
(47, 25000, 5, 0.0735, 'C', 'Current', 'MD', '< 1 year', 'RENT', 'credit_card', 0.888),
(48, 10800, 5, 0.1042, 'B', 'Fully Paid', 'NC', '5 years', 'MORTGAGE', 'medical', 0.172),
(49, 40000, 3, 0.1262, 'A', 'Current', 'TX', '6 years', 'MORTGAGE', 'other', 0.425),
(50, 12000, 3, 0.0672, 'A', 'Current', 'MN', '< 1 year', 'MORTGAGE', 'home_improvement', 0.293),
(51, 1200, 3, 0.0735, 'A', 'Current', 'IL', '3 years', 'RENT', 'debt_consolidation', 0.174),
(52, 15000, 3, 0.2, 'B', 'Current', 'WA', '3 years', 'RENT', 'debt_consolidation', 0.737),
(53, 11200, 5, 0.2388, 'F', 'Late (31-120 days)', 'AK', '4 years', 'MORTGAGE', 'medical', 0.711),
(54, 28000, 5, 0.1709, 'C', 'Current', 'TX', '< 1 year', 'MORTGAGE', 'debt_consolidation', 0.575),
(55, 20000, 3, 0.0993, 'A', 'Current', 'MI', '3 years', 'MORTGAGE', 'debt_consolidation', 0.415),
(56, 40000, 3, 0.0993, 'A', 'Current', 'MO', '5 years', 'MORTGAGE', 'credit_card', 0.608),
(57, 32000, 3, 0.1408, 'D', 'Current', 'NY', '10+ years', 'RENT', 'debt_consolidation', 0.577),
(58, 10000, 5, 0.0944, 'E', 'Current', 'AR', '10+ years', 'RENT', 'credit_card', 0.795),
(59, 8000, 3, 0.1091, 'D', 'Current', 'NY', '10+ years', 'RENT', 'other', 0.413),
(60, 10000, 3, 0.0944, 'B', 'Current', 'GA', '8 years', 'OWN', 'home_improvement', 0.338),
(61, 8000, 3, 0.0993, 'B', 'Current', 'NJ', '3 years', 'RENT', 'credit_card', 0.785),
(62, 35000, 5, 0.1602, 'C', 'Current', 'CA', '7 years', 'RENT', 'debt_consolidation', 0.29),
(63, 25600, 3, 0.1408, 'B', 'Current', 'VA', '10+ years', 'MORTGAGE', 'debt_consolidation', 0.454),
(64, 22725, 3, 0.1042, 'B', 'Current', 'TN', '7 years', 'OWN', 'debt_consolidation', 0.875),
(65, 10000, 3, 0.0993, 'B', 'Current', 'NJ', '3 years', 'RENT', 'debt_consolidation', 0.439),
(66, 40000, 3, 0.1042, 'B', 'Current', 'WA', '3 years', 'RENT', 'small_business', 0.335),
(67, 10000, 5, 0.2582, 'C', 'Current', 'CA', '4 years', 'MORTGAGE', 'home_improvement', 0.174),
(68, 40000, 5, 0.0944, 'C', 'Current', 'PA', '10+ years', 'OWN', 'other', 0.559),
(69, 15000, 3, 0.1359, 'B', 'Current', 'MN', '4 years', 'MORTGAGE', 'credit_card', 0.913),
(70, 27000, 5, 0.1505, 'B', 'Current', 'TX', '3 years', 'RENT', 'debt_consolidation', 0.358),
(71, 25000, 5, 0.001042, 'B', 'Current', 'TX', '10+ years', 'RENT', 'debt_consolidation', 0.524),
(72, 16000, 5, 0.002582, 'E', 'Current', 'MS', '10+ years', 'RENT', 'other', 0.91),
(73, 5000, 3, 0.000944, 'B', 'Current', 'AL', '8 years', 'OWN', 'small_business', 0.442),
(74, 24000, 5, 0.001359, 'C', 'Current', 'NY', '9 years', 'RENT', 'debt_consolidation', 0.877),
(75, 21000, 5, 0.001505, 'C', 'Current', 'FL', '2 years', 'MORTGAGE', 'debt_consolidation', 0.939);
To approve a loan, the bank looks at a borrower's current employment, credit score (grade), home ownership status, and the purpose of the loan. It gathers such details on the borrower to model the possibility of a default. If the credit score is low, the bank assigns a lower grade. For a bank, a low credit score means that the borrower is a high-risk client with a lower chance of paying back the loan on time, so the bank will offer a high interest rate to offset the risk. Home ownership is another indicator: risk is highest when the borrower rents and lowest when they own a home. Such indicators are gathered by the bank and stored in the database, as in the Loan Sales table above.
You may notice that unlike our previous examples, we don't have client names in this dataset. Each client is only referenced using an ID. Most financial institutions will not have client names in each table. This is to ensure privacy and reduce risk of exposing the client's personal information. Information like name, social security number and bank details will be saved in a separate table. Only a select few will have access to it. Each client will be referenced by ID in all other tables. For any project, it is extremely important that you take time to understand the data and how different tables interact with each other.
Let's do a simple select statement to see the data.
select * from loan_sales;
We will go through several exercises to learn how to structure data and build advanced reports using case statements and subqueries. We will also practice preparing data analytics reports.
Case Statements
The best way to understand case statements is by example, so we'll jump right into our first exercise and understand the motivation behind these statements.
Exercise 1: Loan Sales Report by Region:
You will have to categorize sales by the following regions: Northeast, Northwest, Southeast, Southwest and Far North. For example, if the state is Texas, then it should be put in the Southeast Region.
Mapping Table for Region
The best way to achieve this in SQL is with case statements. Case statements are a series of if-else statements. In our example, we have 5 categories and we provide the list of states for each category. The second part of the case statement is the ELSE clause, which provides a result when none of the conditions are met. Note that you must end each case statement by writing END.
Now, this statement will generate data that's not in the table, so you can use AS column_name to give it a custom name.
CASE
WHEN condition_1 THEN result_1
...
WHEN condition_N THEN result_N
ELSE result
END
In the code below, you see that we've listed the states that map to each region. Each WHEN clause maps a set of states to a region. If none of them matches, the query returns the message 'Region is not specified'. The result assigns a region to each state per the mapping specified in the code.
select ID, state,
CASE
WHEN state in ('ME','IL','MI','KS','VA','NJ','CT','NY','OH','PA','MO','MN') THEN 'Northeast Region'
WHEN state in ('AL','MS','AR','MD','GA','SC','TN','FL','LA','NC','KY','TX') THEN 'Southeast Region'
WHEN state in ('CA','MT','WA') THEN 'Northwest Region'
WHEN state in ('AZ','NM') THEN 'Southwest Region'
WHEN state in ('AK') THEN 'Far North Region'
ELSE 'Region is not specified'
END AS Region
from loan_sales;
Exercise 2: Assigning Risk Grade
Let's assign a risk rating to each loan based on the grade in the table. Here is the mapping:
Before you read further, think about how you would write the query, drawing on our first exercise. The result should show the ID, the Grade, and the mapped Risk Rating. Each WHEN clause maps a grade from the loan sales table to a risk rating per the mapping above.
select id, Grade,
CASE
WHEN grade in ('A','B') THEN 'Low Risk'
WHEN grade in ('C','D') THEN 'High Risk'
WHEN grade in ('E','F') THEN 'Very High Risk'
WHEN grade = 'G' THEN 'Junk'
ELSE 'Missing Grade'
END AS Risk_Rating
from loan_sales;
If grade is missing or is not between A-G, then Risk Rating will show as missing grade.
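As a quick sanity check on the mapping (a sketch that reuses the same case statement; Loan_Count is just an illustrative alias), you can count how many loans fall into each rating:

select
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END AS Risk_Rating,
count(*) as Loan_Count
from loan_sales
group by Risk_Rating;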
Exercise 3: Loan Amount by Region and Risk Grade
Here you will use multiple case statements in a single query. Let's put it all together by showing client information along with the assigned risk rating and mapped region, creating the report only for loans that are still current.
select ID, concat('$',format(Funded_Amount,'###,###,###')) AS Loan_Amount,
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END AS Risk_Rating,
CASE WHEN state in ('ME','IL','MI','KS','VA','NJ','CT','NY','OH','PA','MO','MN') THEN 'Northeast Region'
     WHEN state in ('AL','MS','AR','MD','GA','SC','TN','FL','LA','NC','KY','TX') THEN 'Southeast Region'
     WHEN state in ('CA','MT','WA') THEN 'Northwest Region'
     WHEN state in ('AZ','NM') THEN 'Southwest Region'
     WHEN state in ('AK') THEN 'Far North Region'
     ELSE 'Region is not specified' END AS Region,
CASE
WHEN Term = 3 THEN 'Short Duration Loan'
WHEN Term = 5 THEN 'Long Duration Loan'
ELSE 'Not enough data to calculate loan term'
END AS Loan_Term,
Loan_Status
from loan_sales
where Loan_Status = 'Current';
Exercise 4: Using Case Statements to Update Table
Let's assume that the bank decides to no longer capture the grade of each loan and instead wants to use the risk rating method we used above. For the data to be consistent, you want to go back and update the records. For example, if the grade is A, you want the record to now show 'Low Risk'. We will recycle the case statement SQL code from Exercise 2.
Review the Querying Large Databases in SQL article for a refresher on updating tables. The logic is that you specify the operation (UPDATE), the table name (loan_sales), the field to be updated (grade), and the value it should be set to (the risk rating classification, using the case statement).
Query for Schema:
update loan_sales
set grade =
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END;
Note: If you get this error in the schema: 'Request Entity too large'; delete around 20 rows from the table and try running above query. This is a SQL fiddle constraint.
Query for SQL Editor:
select * from loan_sales;
You have now replaced the A-G grades in the table with the risk rating classification.
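If you want to double-check the update, one quick query (a sketch; it assumes the update above has run in the schema) is to count rows per new grade value, where you should now see the risk rating labels instead of A-G:

select grade, count(*) as Loan_Count
from loan_sales
group by grade;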
Exercises to Analyze Trends
Exercise 1: Average Interest Rate by Grade
Let's learn to create interesting reports to understand the data in the table. You are asked to find the average interest rate by grade. The interesting thing to note here is that we perform both a calculation and a concatenation when specifying the field to be pulled into the report. The interest rate in the table needs to be multiplied by 100 to show as a percentage; for example, 0.3 means 30%. You will add a % sign at the end using concat to make the report look more professional.
select Grade, concat(format(avg(Interest_Rate*100), 2),'%') as Interest_Rate from loan_sales
group by Grade;
As you can see in the result, a low grade translates into a high interest rate.
Exercise 2: Maximum and Minimum Interest Rate by Grade
In this report, you will use the max and min functions to dig deeper into the data. You've already seen the average interest rates; now let's see the minimum and maximum values for each grade.
select Grade, concat(format(min(Interest_Rate*100),2),'%') as Min_Interest_Rate,
concat(format(max(Interest_Rate*100),2),'%') as Max_Interest_Rate
from loan_sales
group by Grade;
Here you see that the numbers tell a more interesting story than the previous exercise. The interest rate can be lower for grade 'B' than for grade 'A'. This shows that there are factors other than the grade in deciding the final interest rate, such as home ownership, loan purpose, loan utilization, and length of employment. This highlights why we need several reports that slice and dice the data in different permutations to understand it better.
Exercise 3: Interest Rate Range by Risk Rating
Here you will show the maximum and minimum interest rates as a range using the concat function. You will also show the interest rate not by each grade, but grouped by risk rating. This is a great exercise that uses a case statement, the concat function, number formatting, a basic calculation, and a group by clause. We achieved all of this in less than 10 lines of code.
select
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END AS Risk_Rating, concat(format(min(Interest_Rate*100),2),'%',' - ',format(max(Interest_Rate*100),2),'%') as Interest_Rate_Range
from loan_sales
group by Risk_Rating;
Exercise 4: Client Information
Let's provide more client information from the loan table. Since ID is a number, you can use concat to customize the ID column. We will add 1000 to the ID and prepend 'Client' to the result. For example, instead of showing the ID as 1, the result table will show it as Client1001. This technique is often used to customize reports while keeping ID as a number in the database. The reason we prefer to keep ID as a number rather than a string is to save space and make it easier to modify the data in the future.
This report gives us more color on each client. Such reports are often run by both the sales and marketing teams to get a better understanding of the clientele and model a target audience.
select concat('Client',1000+ID) as Client_ID, Funded_Amount, State, Employment_Length, Home_Ownership, Loan_Purpose, Loan_Utilization
from loan_sales;
Subquery
A subquery is a query nested inside another query. It is an extremely powerful tool and can be used to perform calculations in multiple steps. You can perform more data analysis and manipulation on the fly without needing to create tables. At most companies, you would need approval from a database administrator to create a new table in the schema, which could take days to weeks. Subqueries let you perform intermediate calculations without the need for extra tables.
Exercise 1: Aggregate Loan Amount by Region
Let's aggregate the funded amount for each region. We will go over nested select statements in this section, which are also referred to as subqueries.
The inner select statement produces a result. The outer select statement then selects not from a table in the database but from the result of the inner select statement, and it can also perform calculations on that result. The inner select statement is like a subset of the original table, with or without extra fields, and has to be given a name to differentiate it from the original table.
In our example, the inner select statement provides the ID, state, region (mapped using a case statement), and funded amount. The outer select statement performs a calculation on this result: it sums the funded amounts by region using a group by clause. An easy way to think of it is that we are replacing table_name in a select statement with another entire select statement. Here, we give the inner select statement a name: Region_Table. Also note that it is good practice to format the loan amounts with thousands separators.
select Region, format(sum(Funded_Amount),'###,###,###') AS Funded_Amount
(select ID, Funded_Amount, state,
CASE WHEN state in ('ME','IL','MI','KS','VA','NJ','CT','NY','OH','PA','MO','MN') THEN 'Northeast Region'
     WHEN state in ('AL','MS','AR','MD','GA','SC','TN','FL','LA','NC','KY','TX') THEN 'Southeast Region'
     WHEN state in ('CA','MT','WA') THEN 'Northwest Region'
     WHEN state in ('AZ','NM') THEN 'Southwest Region'
     WHEN state in ('AK') THEN 'Far North Region'
     ELSE 'Region is not specified' END AS Region
from loan_sales) Region_Table
group by region;
Exercise 2: Loan Utilization by Grade
Loan utilization is another metric used by banks. It is the amount of credit the client is currently using relative to all available revolving credit. For example, let's assume the total credit available across all credit cards is $25,000 and the current total bill across all accounts is $2,500. The loan utilization would be 2500/25000 × 100 = 10%. In this query we are going to look at the relationship between the risk rating you've assigned and the borrower's current loan utilization.
To do that, we will use a subquery. Here we recommend taking a step-by-step approach to building the query. First, let's write the subquery.
Initial SubQuery:
select ID,
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END AS Risk_Rating
from loan_sales
This returns each loan's ID and its risk rating. Now let's wrap it in the full query:
select Rating_Table.Risk_Rating, (avg(loan_utilization)*100) AS Average_Loan_Utilization
from loan_sales,
-- SubQuery
(select ID,
CASE WHEN grade in ('A','B') THEN 'Low Risk' WHEN grade in ('C','D') THEN 'High Risk'
     WHEN grade in ('E','F') THEN 'Very High Risk' WHEN grade = 'G' THEN 'Junk'
     ELSE 'Missing Grade' END AS Risk_Rating
from loan_sales
) Rating_Table
-- Always equate IDs when using subquery to ensure accurate results
where loan_sales.ID = Rating_Table.ID
group by Rating_Table.Risk_Rating
Make sure to equate the two IDs in the WHERE clause. What would happen if we didn't do that? Let's take a look.
-- where loan_sales.ID = Rating_Table.ID
As you can see, we would have gotten an incorrect result!
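The reason is that without the WHERE condition, the comma between loan_sales and the subquery performs a cross join: each of the 75 loans is paired with all 75 rows of Rating_Table, so every risk rating group ends up averaging the utilization of every loan. You can see the blow-up directly (a sketch; the counts assume the 75-row table above):

select count(*) from loan_sales, (select ID from loan_sales) Rating_Table;
-- returns 5625 (75 x 75) rows instead of 75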
Temporary Table
All tables are created in the schema and queries are written in the SQL editor. A temporary table is a special type of table that allows you to store a temporary result set, which you can reuse several times in a single session. Below are some features of the temporary table:
When writing an advanced query with multiple joins and subqueries, it is handy to use a temporary table. If you notice that a subquery is being used multiple times, it is helpful to create a temporary table: you can simply reference the temporary table instead of repeating the subquery. This will also improve the query's performance.
It is deleted automatically once you close the session. Since online SQL editors don't have sessions, you cannot experience it on SQL Fiddle.
It is only available and accessible to the user who creates it. In a conventional database like Oracle SQL, there are several users, and every time a user logs in a session is created. Any change made by a user, such as creating a new table or updating data in a table, is saved permanently for all users. Temporary tables are an exception: they exist only for the user who created them. Note that within the same session, temporary tables can't share a name.
If you want to delete a temporary table in the same session it was created in, use DROP TEMPORARY TABLE table_name. It is best practice to delete temporary tables before exiting the session.
Exercise for Temporary table:
Let's re-do Exercise 1 (the aggregation of loan amounts by region) using a temporary table, putting the case statement in a temporary table as shown below.
CREATE TEMPORARY TABLE TABLE_NAME
-- followed by a select statement
CREATE TEMPORARY TABLE LOAN_SALES_TEMP
select ID, Funded_Amount, state,
CASE WHEN state in ('ME','IL','MI','KS','VA','NJ','CT','NY','OH','PA','MO','MN') THEN 'Northeast Region'
     WHEN state in ('AL','MS','AR','MD','GA','SC','TN','FL','LA','NC','KY','TX') THEN 'Southeast Region'
     WHEN state in ('CA','MT','WA') THEN 'Northwest Region'
     WHEN state in ('AZ','NM') THEN 'Southwest Region'
     WHEN state in ('AK') THEN 'Far North Region'
     ELSE 'Region is not specified' END AS Region
from loan_sales;
Here, instead of writing a subquery, we simply refer to the new temporary table. Modified query below.
select Region, format(sum(Funded_Amount),'###,###,###') AS Funded_Amount
from loan_sales_temp
group by region;
It should give the same result as Exercise 1.
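Following the best practice mentioned above, we can clean up before ending the session (DROP TEMPORARY TABLE is standard MySQL syntax):

DROP TEMPORARY TABLE loan_sales_temp;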
SubQuery or Temporary Table?
While writing queries, how can you decide whether to use a temporary table or a subquery? It is not always an obvious choice. From our experience, temporary tables are most useful for quick ad-hoc reports, especially complex ones, and for debugging. If you are creating a query or report for regular reporting, it's best to use subqueries.
Nirali Shah
Prabhav Jain, Moderator of Data Analytics.
Size control in mammalian cells involves modulation of both growth rate and cell cycle duration
Clotilde Cadart, Sylvain Monnier, Jacopo Grilli, Pablo J. Sáez, Nishit Srivastava, Rafaele Attia, Emmanuel Terriac, Buzz Baum, Marco Cosentino-Lagomarsino & Matthieu Piel
Nature Communications volume 9, Article number: 3275 (2018)
Despite decades of research, how mammalian cell size is controlled remains unclear because of the difficulty of directly measuring growth at the single-cell level. Here we report direct measurements of single-cell volumes over entire cell cycles on various mammalian cell lines and primary human cells. We find that, in a majority of cell types, the volume added across the cell cycle shows little or no correlation to cell birth size, a homeostatic behavior called "adder". This behavior involves modulation of G1 or S-G2 duration and modulation of growth rate. The precise combination of these mechanisms depends on the cell type and the growth condition. We have developed a mathematical framework to compare size homeostasis in datasets ranging from bacteria to mammalian cells. This reveals that a near-adder behavior is the most common type of size control and highlights the importance of growth rate modulation to size control in mammalian cells.
There is little consensus about the way mammalian cells control their size1,2. Studies of single-celled yeast and bacteria have revealed that in order to achieve size homeostasis, cells must modulate the amount of growth produced during the cell cycle such that, on average, large cells at birth grow less than small ones. Size homeostasis can be exemplified by three simple limit cases: the sizer, the adder and the timer. Perfect size control has been reported for the fission yeast S. pombe3, where a size threshold (sizer) was proposed to control entry into mitosis4. By contrast, an "adder" mechanism relies on the addition of a constant volume at each cell cycle, independent of initial size5,6, causing cells to converge on an average size after a few generations. This behavior has been reported for several types of bacteria and cyanobacteria, and in budding yeast7,8,9,10,11. Finally, if cells grow exponentially for a constant amount of time (a "timer" mechanism), large cells grow more than smaller ones and sizes diverge rapidly. Alternatively, if cells grow linearly, a timer results in cells growing by the same amount each cell cycle, therefore maintaining size homeostasis12.
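To see why an adder converges, a short standard calculation helps (this is textbook adder algebra rather than a result from the cited works, and it assumes symmetric division and a fixed added volume Δ): the birth size at generation n + 1 satisfies

$$ V_{birth}^{(n+1)}=\frac{V_{birth}^{(n)}+\Delta}{2}\quad\Longrightarrow\quad V_{birth}^{(n+1)}-\Delta=\frac{1}{2}\left(V_{birth}^{(n)}-\Delta\right) $$

so any deviation from the fixed point Vbirth = Δ is halved at each generation: fluctuations are corrected, but only partially within a single cycle.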
In bacteria and yeast, the development of high-throughput single live-cell imaging has provided a wealth of measurements which, together with the development of theoretical models, enabled great progress in the characterization of size control in these organisms11,13,14,15,16,17,18,19,20. Similar progress has yet to be made in mammalian cells, which have complex and fluctuating shapes. To date, most studies on mammalian cells have relied on population-level measurements12,21,22,23,24. These include attempts to extrapolate growth dynamics from size measurements at fixed time points across a population24,26,27. Recently, a variety of parameters such as cell dry mass26,28,29, buoyant cell mass30 and cell density31 have been used as proxies for size at the single-cell level, mostly through indirect techniques. Among these recent studies are measurements of single-cell size at specific times in the cell cycle32 or through complete cell cycles28,29,30. Although most data in unicellular organisms were obtained on cell volume, and most size-sensing mechanisms currently debated are thought to involve concentration-dependent processes19,33,34,35, measurements of volume trajectories of single cycling mammalian cells have not been reported yet, and it is thus unclear whether volume and mass are similarly relevant for size control.
Similarly to unicellular organisms, mammalian cells have been hypothesized to control their size via a modulation of cell cycle duration. Specifically, an adaptation of G1 duration as a function of cell size has been proposed by a series of indirect studies21,23,25,37 and one direct study32. Other studies on mammalian cells have reported negligible changes in cell cycle timing and have hypothesized that changes in growth speed may contribute to cell size control24,27 (we define here growth speed as the evolution of size as a function of time, and growth rate as the evolution of growth speed as a function of size). Direct observation of a convergence of growth speed at the G1/S transition was made in lymphoblastoid cells30, but how this leads to an effective cell size homeostatic behavior was not characterized. The idea that growth speed modulations could play a role in mammalian cell size control was not tested directly, and its contribution to overall size homeostasis has not been compared to that of time modulation. Moreover, the contribution of S-G2 duration to size control and the effective homeostatic behavior from birth to mitosis have not been characterized yet.
To address these questions as directly as possible, we recently developed two methods to precisely measure the volume of large numbers of single live cells over several days37,38,39. In this study, we used these tools to track single-cell volume growth over complete cell cycles. We characterize the homeostatic behavior of a variety of cultured and primary mammalian cells and show that they behave like adders (or near adders). We then quantify the modulation of time (in G1 and S-G2) and growth rate that contribute to size control. Finally, we develop a quantitative framework that characterizes the relative contributions of timing and growth modulation to size homeostasis from bacteria to mammalian cells.
Single-cell volume measurement over entire cell division cycles
The homeostatic behavior of cells is identified by assessing the relation, for single cells, between their size at mitotic entry and their size at birth. This relation has never been reported for freely growing mammalian cells in culture.
To establish this relation, it is necessary to track single proliferating cells and measure the volume of the same cell at birth and at mitotic entry. We implemented two distinct methods to obtain these measures. First, we grew cells inside microchannels of a well-defined cross-sectional area (Supplementary Fig. 1a and ref. 37), as was recently reported for immune cells32. In such a geometry, dividing cells occupied the whole section of the channels and had a cylindrical shape, so we could infer their volume from their length. The second method we used is a Fluorescence eXclusion measurement method (FXm) to measure volume38,39 (Fig. 1a, Supplementary Movie 1). In this technique, cells are seeded in a chamber of known height and a fluorescent probe that does not enter the cells is added to the culture media. The fluorescence intensity is negatively proportional to the height of the cell, and the exact volume of the cell can therefore be calculated (Fig. 1a). In previous work, we validated the FXm method and showed that it allows single-cell volume measurement independently of cell shape38,39. Here, we optimized the method for long-term recording and automated analysis of populations of growing cells (controls are presented in Supplementary Fig. 1b and Methods). This method has several advantages (reviewed in ref. 40): compared with microchannels, it does not require growing cells in a very confined environment, which is thought to constrain growth to a linear pattern32, and it is more precise. It also produces complete growth trajectories for single cells (Fig. 1b, c and Supplementary Fig. 1c). Visual inspection of the movies was used to determine key points in the cell division cycle for each single cell tracked. Volume at birth was defined as the volume of a daughter cell 40 min after cytokinesis onset, while volume at mitotic entry was defined as the volume of the same cell 60 min prior to the next cytokinesis onset (Fig. 1b, Supplementary Fig. 1d, e). Analysis of growth speed as a function of size, for a large number of single cells and cell aggregates, showed that the average growth speed increased linearly with cell size (Supplementary Fig. 1f). This supports the conclusion that on average cells grew faster than linearly and is compatible with a (mean) exponential mode of growth, as previously reported in some cases for freely growing cells26,27,29,30 (note that other modes of growth that are super-linear may also describe our data, as explained in Supplementary Note 1, but for simplicity we approximate to exponential growth).
Single-cell volume tracking over entire cell division cycles. a Principle of the fluorescence exclusion volume measurement method (FXm). Left: top view of the measurement chamber used for 50 h long time-lapse acquisitions (see Methods). Right: side view of the chamber and principle of the measurement. Fluorescence intensity at a point Ix,y of the cell is proportional to the height of the chamber minus the height hx,y of the cell at this point. Fluorescence intensity Imax is the intensity under the known height of the chamber roof hmax, where no object excludes the fluorescence. Integration of fluorescence intensity over the cell area gives the cell volume Vcell after calibrating the fluorescence intensity signal α = (Imax − Imin)/hmax (see Methods). b Sequential images of a HT29-wt cell acquired for FXm. Mitosis and birth are defined as the time points 60 min before and 40 min after cytokinesis, respectively (see Methods). The white dashed circle indicates the cell measured in Fig. 1c, the colored lines indicate the time points highlighted by circles of the same color in Fig. 1c. Time is in hours:minutes. Scale bar is 20 µm. c Single HT29-wt cell growth trajectory (volume as a function of time) and key measurement points (see Methods). The time points shown in Fig. 1b and underlined in gray, red, or yellow are indicated by points of matching colors on the curve: the gray points correspond to volume at mitotic entry, the red points correspond to volume at cytokinesis and the yellow points to volume at birth. ΔtTOT is the total duration of the cell division cycle from birth to mitosis and ΔVTOT is the total added volume. d Average growth speed for three independent experiments with HT29-wt cells. n = 39 (exp. 1), n = 46 (exp. 2), n = 47 (exp. 3). The p-values are the result of a pairwise t test comparing the means. See also Supplementary Figure 1 and Supplementary Movie 1
We studied two types of cancerous epithelial cell lines (HT29 wild-type (HT29-wt) and HT29 expressing hgeminin-mcherry (HT29-hgem), HeLa expressing hgeminin-GFP (HeLa-hgem) and HeLa expressing MyrPalm-GFP H2B-mcherry (HeLa-MP)), one B lymphoblast cancerous cell line (Raji), one non-cancerous aneuploid epithelial cell line (MDCK expressing MyrPalm-GFP (MDCK-MP)), and one hTERT-immortalized epithelial cell line (RPE1). For each experiment performed, the dataset was checked for quality: we verified that the distribution of volumes at birth and the average growth speed did not change throughout the experiment, and that these values did not change from one experiment to another (Fig. 1d and Supplementary Fig. 1g). Note that we kept one dataset which showed a significant, but small, decrease in volume through the course of the experiment, because despite optimization, we could not avoid some internalization of dextran by these cells (Supplementary Fig. 1g, HeLa-hgem cells, Supplementary Movie 1). This decrease was however below 10% at the end of experiments lasting 40 h, and thus could not impact our analysis. We were able, with these methods, to produce fully validated high-quality datasets of single-cell volume over entire cycles, which can be further used to ask elementary questions on volume homeostasis for proliferating cultured mammalian cells.
A near-adder behavior is observed in mammalian cells
The effective homeostatic behavior can be assessed phenomenologically by quantifying the relation between added volume during the cell cycle and volume at birth (Fig. 2a). If cells double their volume (i.e., in the case of exponentially growing cells with a timer), the added volume is equal to the volume at birth, thus the two values linearly correlate with a slope of 1, and the final vs. initial volume plot shows a slope of 2. On the other hand, if cells are perfectly correcting for differences in size (sizer), the added volume is smaller for larger cells, and the slope of this plot is negative, while the final volume is identical for all cells independently of their initial volume.
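Restating these limit cases compactly (in the notation used below, with Δ a constant added volume and V* a fixed target size):

$$ \text{sizer: } V_{mitosis}=V^{*};\qquad \text{adder: } V_{mitosis}=V_{birth}+\Delta;\qquad \text{timer, exponential growth: } V_{mitosis}=2V_{birth} $$

so the final vs. initial volume plot has slope 0, 1, or 2, respectively.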
Adder or near-adder behavior in cultured mammalian cells. a Left: total volume gained during one cell division cycle ΔVTOT vs. volume at birth Vbirth for wild-type HT29 cells (N = 3). Right: volume at mitosis Vmitosis vs. Vbirth. Dashed gray lines show the expected trends in case of a sizer, an adder, and a timer. Blue lines: linear fit on the binned data weighted by the number of observations in each bin. b Left graph: plot of volume at mitosis vs. volume at birth rescaled by the mean volume at mitosis for various cultured mammalian cell lines. Ideal slopes for stereotypical homeostatic behaviors are shown as black and gray lines. The points are median bins (see Supplementary Fig. 2b for equivalent graphs with single points). For each cell type, a linear fit Vmitosis = a·Vbirth + b is made on the bins, weighted by the number of observations in each bin. Right table: estimates from the linear regression for each cell type: a (slope coefficient), s.e. a (standard error for a), b (slope intercept). The theoretical slope coefficients and intercepts expected in case of sizer, adder, or timer are also indicated. L1210 are mouse lymphoblastoid cells from ref. 33. Apart from the L1210 cells (buoyant mass), data are volumes acquired with either the FXm or the microchannel method. c Top: scheme of a cell confined in a microchannel (nucleus in red). Bottom: sequential images of an asymmetrically dividing HeLa cell expressing MyrPalm-GFP (plasma membrane, green) and Histone2B-mcherry (nucleus, red) growing inside a microchannel. The outlines of the cell of interest and its daughters are shown with white dotted lines. Daughter cells are indicated with solid white bars. Scale bar is 20 µm. Time is hours:minutes. d Ratio of volume in pairs of sister cells at birth and mitosis for MDCK-MP and HeLa-MP cells growing inside microchannels. Control, in non-confined condition, corresponds to HeLa-hgem cells measured with FXm. A Wilcoxon signed rank test was performed to test that the median ratio was lower from birth to mitosis in each condition. See also Supplementary Figure 2 and Supplementary Movie 2
The six cell types we analyzed (HT29, HeLa, MDCK, Raji, RPE1, and L1210) behaved neither as timers nor as sizers (Supplementary Fig. 2a-c). With the exception of Raji cells, which showed a large dispersion of added volumes, and for which added volume correlated positively with volume at birth (Supplementary Fig. 2a), we instead found that added volume showed no correlation (HT29-hgem, HeLa-hgem, HeLa-MP, and MDCK-MP) or a weak negative correlation (HT29-wt and RPE1) with volume at birth (Fig. 2a, Supplementary Fig. 2a). Consistently, the volume at mitotic entry was linearly correlated with volume at birth, with a slope ranging from 0.7 to 1.2 (Fig. 2b, Supplementary Fig. 2b). This observation was also reproduced when analyzing previously published results obtained on lymphoblastoid L1210 cells (kindly shared by the authors30). (Note that in the rescaled plot shown in Fig. 2b, RPE1 and HeLa-hgem do not overlap with the other datasets because they displayed a lower overall doubling ratio Vmitosis/Vbirth (discussed in Methods and Supplementary Fig. 2d).) Thus, with the exception of Raji cells, five of the six cell lines studied here displayed an adder or near-adder type of homeostatic behavior, reminiscent of what was already described for several bacterial species and for the buds of budding yeast cells7,8,11.
In bacteria, this weak form of volume homeostasis was shown to compensate for asymmetries in sizes occurring at division5,7. A direct prediction is that, after an asymmetric division, the difference in size of the two daughter cells would be reduced by half in the following cycle, but not fully corrected. To confirm the observation of the near-adder in cells with large asymmetries in size, we artificially induced asymmetric divisions by growing two different cell types (HeLa and MDCK) inside microchannels (Supplementary Fig. 1a, Supplementary Movie 2). Confinement prevents mitotic rounding, which leads to errors in the mitotic spindle positioning and ultimately generates uneven division of the mother cell (Fig. 2c, d, refs.37,41). We then compared the asymmetry in volume, at birth and at the next mitosis, between pairs of daughter cells. For both cell types, the level of volume asymmetry at birth was higher in channels than in cells that divided outside of the channels, and was significantly reduced at entry into the next mitosis, but not completely compensated for (Fig. 2d), as predicted by a near-adder behavior. In conclusion, this first analysis revealed that most cultured mammalian cell lines display a near-adder behavior.
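The halving of sister-cell asymmetry expected under a pure adder can be made explicit with a short numerical example (a sketch under idealized assumptions: fixed added volume and perfectly symmetric divisions):

```python
# Under a pure adder with symmetric division, a volume difference between
# two lineages is halved at each generation: it decays, but is never
# fully corrected within a single cycle.
dV = 1000.0             # fixed volume added per cycle (arbitrary units)
v1, v2 = 800.0, 1600.0  # asymmetric sisters at birth
for gen in range(4):
    print(f"generation {gen}: difference at birth = {abs(v1 - v2):.0f}")
    v1 = (v1 + dV) / 2.0  # grow by dV, then divide symmetrically
    v2 = (v2 + dV) / 2.0
# prints differences of 800, 400, 200, 100
```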
Primary human cells behave as near-adders
We then wondered whether the observation of the near-adder extended to primary cells and repeated our experiments on normal associated fibroblasts (NAFs) and normal human dermal fibroblasts (NHDFs). These cells come from healthy tissues in patients and present the advantage of not carrying mutations in growth or cell cycle pathways. However, they are a complex experimental system because they are very heterogeneous and out of steady state in culture, where they progressively stop dividing. As expected, NAF and NHDF were highly variable both in cell cycle duration and in volume distribution (Supplementary Fig. 3a-c). In the FXm chambers, they also showed a low overall doubling ratio 〈Vmitosis/Vbirth〉 that ranged from 1.5 to 1.6 (Fig. 3a), indicating that they were not at steady state (see Methods and Supplementary Fig. 3d-f). It nevertheless remained possible to characterize their homeostatic behavior. The analysis of the relationship between volume at mitosis and volume at birth revealed that NHDF and three different samples of NAF, similar to immortalized cell lines, all behaved as adders or near-adders (Fig. 3b, Supplementary Fig. 3g-j).
Near-adder behavior in primary human cells. a Boxplot showing the distribution of overall replicative growth (volume at mitosis divided by volume at birth) for three samples of NAF and NHDF primary cells. NAF-A: n = 48, N = 2; NAF-B: n = 53, N = 2; NAF-C: n = 53, N = 2; NHDF: n = 56, N = 3. b Volume at mitosis as a function of volume at birth for three samples of NAF and NHDF primary cells. Dashed lines are visual guides for the timer (assuming exponential growth, slope = 〈Vmitosis/Vbirth〉, intercept = 0), adder (slope = 1, intercept = 〈ΔVTOT〉) and sizer (slope = 0, intercept = 〈Vmitosis〉). Solid lines represent linear fits on the bins (colored squares) weighted by the number of observations in each bin
In conclusion, the adder or near-adder is the most common homeostatic behavior observed in a variety of immortalized and primary mammalian cells. Importantly, a near-adder observed at the phenomenological level does not necessarily imply the existence of a molecular mechanism "counting" added volume. The most recent findings in unicellular organisms instead suggest that the near-adder may emerge from the combination of several mechanisms acting in parallel or sequentially during the cell cycle42.
Modulation of G1 duration contributes to size control
Modulations of cell cycle duration as a function of size are the basis of size regulation in unicellular organisms. In animal cells, similarly to budding yeast33,43,44,45,46, indirect population-level approaches suggested that modulation of G1 duration is important for size control21,23,36 and that this occurs through the p38-MAPK pathway36. Recent direct measurements confirmed this hypothesis using confinement inside microchannels32, a system that caused cells to grow linearly and thus did not allow the study of homeostatic behavior in S-G2 or over the whole cell cycle. Hence, these points had yet to be investigated for cells growing in regular culture conditions, such as the FXm chambers, where cells grew exponentially (Supplementary Fig. 1f).
To investigate the contribution of modulations of G1 and S/G2 phase duration in size control, we combined cell volume measurements on HT29 and HeLa cells with a classical marker of cell cycle phases, hgeminin, which accumulates in the cell nucleus at S-phase entry47 (Fig. 4a, Supplementary Fig. 4a and Supplementary Movie 3). HeLa expressing hgeminin-mcherry (HeLa-hgem) on average cycled faster than HT29 expressing hgeminin-mcherry (HT29-hgem) (Fig. 4b). This difference was mostly the consequence of a longer and more variable G1 phase in HT29-hgem (HT29-hgem, CV = 53%, HeLa-hgem, CV = 18%) while S-G2 duration showed little variation for both cell types (HT29-hgem, CV = 18%, HeLa-hgem, CV = 17%) (Fig. 4b, Supplementary Fig. 4b).
Modulation of G1 duration as a function of volume at birth. a Sequential images of HT29 cells expressing hgeminin-mcherry (top row) in an FXm chamber (bottom row). Right graph shows the quantification of hgeminin-mcherry in the cell as a function of time. Time zero corresponds to mitosis. The vertical white dashed line and arrows indicate the time at which hgeminin-mcherry becomes detectable. G1 phase (red line) spans from birth to appearance of hgeminin (G1/S transition) and S-G2 phases (green line) from G1/S to the next entry into mitosis. Scale bar is 20 µm. Time is in hours:minutes. b Kernel density estimates of the duration Δt of G1 phase (red), S-G2 phase (green) and total cell cycle (blue) for both HT29-hgem and HeLa-hgem. CV is the coefficient of variation (in %). c, d Duration of G1 phase, ΔtG1, as a function of the logarithm of volume at birth (Vbirth) for HT29-hgem (N = 4) (c) and HeLa-hgem (N = 2) (d). Red dashed line and gray area are a visual guide for minimum G1 duration around 4 h. e, f Total added volume in G1, ΔVG1, as a function of volume at birth (Vbirth) for HT29-hgem (N = 4) (e) and HeLa-hgem (N = 2) (f). g, h Volume at G1/S (VG1/S) vs. volume at birth (Vbirth) for HT29-hgem (N = 4) (g) and HeLa-hgem (N = 2) (h). The dashed gray lines indicate the expected trend in the case of a timer (slope = 〈VG1/S/Vbirth〉, intercept = 0), an adder (slope = 1, intercept = 〈ΔVG1〉) and a sizer (slope = 0, intercept = 〈VG1/S〉). i, j Cumulative frequency graph of G1 duration binned for three ranges of volumes at birth Vbirth for HT29-hgem (i) (N = 4) and HeLa-hgem (j) (N = 2). Dashed line and gray area are a visual guide for minimum G1 duration around 4 h. For the plots in c–h, individual cell measures (dots) and median bins (squares) ± s.d. (bars) are shown. Solid lines are linear regressions on the median bins weighted by the number of observations in each bin. See also Supplementary Figure 4 and Supplementary Movie 3
Despite this quantitative difference in the average duration of G1, HT29-hgem and HeLa-hgem displayed qualitatively common traits. For both cell types, G1 duration and added volume in G1 correlated negatively with cell volume at birth (Fig. 4c–f), indicative of the existence of size control via a modulation of G1 duration. Consistently, the volume at the end of G1 plotted against volume at birth showed a slope below 1 (HT29-hgem: a = 0.71 ± 0.01, HeLa-hgem: a = 0.69 ± 0.01, slope ± standard error), suggesting an intermediate strength of size control, between the adder and the sizer (Fig. 4g, h).
This analysis also suggests that there is a minimal duration of the G1 phase, an observation that reproduces recent results in microchannels32. Indeed, for HT29-hgem, smaller cells showed a wider dispersion of G1 durations while larger cells tended to spend only a minimal time in G1 (about 4 h) (Fig. 4c). This is well illustrated by the cumulative distribution functions of the time spent in G1 for three ranges of volumes at birth (Fig. 4i). HeLa-hgem cells, which on average cycle faster, seemed, by comparison with the HT29-hgem cells, to all cycle very close to a similar minimum G1 duration (about 4 h) (Fig. 4d, j).
Together, these results provide evidence for size control of intermediate strength between the adder and the sizer in G1 that involves a modulation of G1 duration. Additionally, modulation of G1 timing appears limited by the existence of a minimum G1 duration.
Modulation of S-G2 duration in HeLa but not HT29
In order to test the existence of size control in S-G2, we repeated the same analysis as done in G1. For HT29-hgem cells, S-G2 duration was not correlated with volume at G1/S (Fig. 5a) and showed little cell-to-cell variation (Fig. 4b). This is typically indicative of a "timer" behavior. As expected from the combination of a timer and exponential growth, we found a positive correlation between added volume in S-G2 and volume at the G1/S transition (Fig. 5b) and the slope of volume at mitosis vs. volume at G1/S was very close to the expected slope for a timer (Supplementary Fig. 5a). HeLa-hgem cells showed a different behavior. For these cells, S-G2 duration was negatively correlated with volume at the G1/S transition (Fig. 5c) and added volume in S-G2 was not correlated with volume at G1/S (Fig. 5d). Hence, these cells displayed a near-adder behavior in S-G2, as confirmed by the plot of volume at mitosis vs. volume at G1/S (Supplementary Fig. 5b).
S-G2 duration is negatively correlated with volume at G1/S in HeLa but not HT29 cells. a Duration of S-G2 phase, ΔtS−G2, vs. the logarithm of volume at the G1/S transition (VG1/S) for HT29-hgem (N = 4). b Added volume in S-G2 phase, ΔVS−G2, vs. volume at the G1/S transition (VG1/S) for HT29-hgem (N = 4). c Duration of S-G2 phase, ΔtS−G2, vs. the logarithm of volume at the G1/S transition (VG1/S) for HeLa-hgem (N = 2). d Added volume in S-G2 phase, ΔVS−G2, vs. volume at the G1/S transition (VG1/S) for HeLa-hgem (N = 2). e, f Added volume in S-G2, ΔVS−G2, vs. added volume in G1 (ΔVG1) for HT29-hgem (N = 4) (e) and HeLa-hgem (N = 2) (f). Dashed black line represents the slope expected in the case of a mechanistic adder where ΔVS−G2 = 〈ΔVTOT〉 − ΔVG1 (slope of −1). g, h Added volume in the whole cell cycle, ΔVTOT, vs. volume at birth (Vbirth) for HT29-hgem (N = 4) (g) and HeLa-hgem (N = 2) (h). For all the plots in this figure, individual cell measures (dots) and median bins (squares) ± s.d. (bars) are shown. Solid line is a linear regression on the median bins weighted by the number of observations in each bin. See also Supplementary Figure 5
Our observation of some control on size in S-G2 in HeLa cells cannot be compared with previous results, which focused only on size control in G121,23,32. Following the strategy proposed by Chandler-Brown and coworkers45, we tested the hypothesis of a "mechanistic adder", i.e., that the rate-limiting process for cell-cycle completion is the addition of a nearly constant volume from birth to mitosis. Since in this hypothesis added volume in S-G2 should exactly compensate added volume in G1, so that ΔVG1 + ΔVS−G2 = ΔVTOT = constant, one can test the relation between ΔVS−G2 and ΔVG1: a slope of −1 would correspond to the mechanistic adder45. Contrary to budding yeast, for both HT29-hgem and HeLa-hgem (Fig. 5e, f), the slope was generally negative and followed a trend that might be compatible with the mechanistic adder prediction, except for a few strong outliers in HT29-hgem cells.
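For concreteness, the mechanistic-adder test reduces to a single regression; a minimal sketch (our construction, following the logic of ref. 45) is:

```python
# If total added volume were constant, DV_SG2 = <DV_TOT> - DV_G1, so the
# regression of DV_SG2 on DV_G1 should have a slope of -1.
import numpy as np

def mechanistic_adder_slope(dv_g1, dv_sg2):
    slope, intercept = np.polyfit(dv_g1, dv_sg2, 1)
    return slope  # compare against the predicted value of -1
```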
Thus, our analysis of growth in S-G2 revealed an unsuspected role of modulation of S-G2 duration for size control in HeLa cells, while S-G2 was closer to a timer in HT29 cells. Whether this additional size control mechanism is cell-type dependent or rather specific to faster-growing cells will require further investigation. Taken together with the analysis of G1 phase (Fig. 4), these results show that modulation of G1 and/or S-G2 duration contributes to size control in cells that on average grow exponentially but that the two cell types we studied rely differently on these mechanisms in order to achieve a similar "near-adder" effective behavior (Fig. 5g, h).
Large cells do not adapt G1 duration
Figure 4 shows a lower limit on the duration of the G1 phase for the largest HT29-hgem cells (Fig. 4c, i) and fast-cycling HeLa-hgem cells (Fig. 4d, j), which implies that, if growth were exponential and the homeostasis mechanism were limited to modulations of time, homeostasis in G1 would not be possible for larger cells. To further test this, we produced larger cells at birth by arresting HeLa-hgem cells with Roscovitine, an inhibitor of major interphase cyclin-dependent kinases such as Cdk248. After a 48 h block with Roscovitine, the drug was rinsed out, and cells were injected into the volume measurement chamber (Fig. 6a, Supplementary Movie 4). Cells that had been treated with Roscovitine were on average 1.7-fold larger than the controls (Fig. 6b, top histogram). Analysis of steadiness and homeostatic behavior in S-G2 is shown in Supplementary Fig. 6a-d, as we focus here on control in G1. As expected, large Roscovitine-treated cells displayed a shorter G1 duration (Fig. 6b, right histogram) and were on average closer to a minimal G1 duration (about 4 h), independently of their volume at birth (Fig. 6b). Surprisingly, the large Roscovitine-treated cells that had lost G1 modulation grew, during G1, by a constant amount of volume that was independent of their volume at birth and on average similar to that of the control condition (Welch t test comparing the means, p = 0.2423) (Fig. 6c).
Size correction by growth-rate modulation in control and abnormally large HeLa cells. a Examples of single-cell growth trajectories for HeLa-hgem cells, either control ('ctrl') or after washout from Roscovitine treatment ('rosco'), as a function of time from birth. b Duration of G1, ΔtG1, as a function of the logarithm of volume at birth (Vbirth) for HeLa-hgem cells. Results from the linear fit: control: a = −4 ± 0.1, p = 1×10−90, R2 = 0.888, n = 199, N = 2; Roscovitine: a = 0 ± 0.2, R2 = 0.019, p = 1, n = 120, N = 3. Red dashed line and gray area are a visual guide for minimum G1 duration. Top: kernel estimates of volume at birth; control: 〈log Vbirth〉 = 7.37, n = 231, N = 2; Roscovitine: 〈log Vbirth〉 = 7.86, n = 136; Welch t test comparing the means: p = 2.2×10−16. Right: kernel estimates of ΔtG1; control: 〈ΔtG1〉 = 7.0 h, n = 201, N = 2; Roscovitine: 〈ΔtG1〉 = 6.1 h, n = 124, N = 3; Welch t test comparing the means: p = 6.5×10−7. c Added volume in G1 (ΔVG1) vs. volume at birth for HeLa-hgem cells. Results from the linear fit: control: a = −0.25 ± 0.01, p = 1×10−46, R2 = 0.706, n = 178, N = 2; Roscovitine (red line): a = 0.1 ± 0.02, p = 0.1, R2 = 0.046, n = 108, N = 3. Dashed lines represent the median added volume in G1 for the control (〈ΔVG1〉 = 350 µm3, n = 178) and the Roscovitine (〈ΔVG1〉 = 390 µm3, n = 108) conditions. Right: kernel estimates of ΔVG1. Welch's t test comparing the mean added volume: p = 0.2423. d Instantaneous growth speed dv/dt in G1 as a function of volume, with bivariate kernel densities (concentric circles) and average bins for control (n = 119, N = 1) and Roscovitine (n = 49, N = 2) conditions. Results from the linear fits, control: a = 0.0489 ± 0.0005, p ≈ 0, R2 = 0.78; Roscovitine: a = 0.047 ± 0.002, p = 1×10−137, R2 = 0.49. e Top: kernel density of volume at birth for control and Roscovitine-treated HeLa-hgem cells grouped together. Bars represent the 20 and 80% percentiles and define three groups: cells within the 0–20% percentile (blue), 20–80% percentile (orange) and 80–100% percentile (green). Bottom: same data as d but for the three groups analyzed separately. Results from the linear fits (lines) on the average bins (dots) for each group, with nc (number of control cells) and nr (number of Roscovitine-treated cells): 0–20%: a = 0.119 ± 0.008, p = 4.1×10−5, R2 = 0.98, nc = 24, nr = 0; 20–80%: a = 0.072 ± 0.009, p = 4.88×10−5, R2 = 0.90, nc = 60, nr = 15; 80–100%: a = 0.05 ± 0.01, p = 0.00192, R2 = 0.43, nc = 3, nr = 24. For b–d, the control condition ('ctrl') is in gray and the Roscovitine-treated condition ('rosco') is in red. Individual cell measures (dots) as well as median (c, d) or average (d) bins (ctrl: squares, rosco: triangles) and s.d. (bars) are shown. Solid lines show linear regressions on the bins weighted by the number of events in each bin. a is always given as slope ± standard error. See also Supplementary Figures 6 and 7, Supplementary Movie 4
Growth-rate modulations contribute to size correction
If G1 duration is not modulated, an alternative mechanism to control size could be a modulation of growth rate. To assess the growth mode of cells in this experiment, we analyzed single-cell growth curves in G1 and looked at how the instantaneous growth speed (i.e., the growth speed measured over short periods of time, dt = 90 min) correlated with volume during this period of time (see Methods and Supplementary Fig. 7a-c). This showed that, for both control and Roscovitine-treated cells, and over the whole range of volumes, growth speed in G1 increased linearly with volume, compatible with an exponential growth mode even for the largest cells (Fig. 6d for G1, Supplementary Fig. 7d, e for S-G2 and the complete cell cycle, and Supplementary Fig. 7f relative to the G1/S transition). Thus, the growth modulation that leads to size control in large cells has to be more complex than a simple switch to a linear mode of growth.
To better characterize a potential growth rate modulation, we pooled Roscovitine-treated and control cells and repeated the plot of instantaneous growth speed as a function of volume as in Fig. 6d, but defined three sub-groups of cells containing: (i) the 20% smallest cells at birth, (ii) the intermediate-sized cells and (iii) the 20% largest cells at birth (Fig. 6e). We recall here that, by definition, the slope of such a plot indicates the growth rate of the cells. This analysis showed that although growth was compatible with exponential for all ranges of size at birth, the slope of growth speed vs. volume decreased for larger sizes at birth, suggesting a lower growth rate for cells born larger (Fig. 6e). This conclusion holds true even without the Roscovitine condition, since the first two groups of cells (the 20% smallest and the intermediate-sized cells) contained a majority of cells from the control condition.
In conclusion, large Roscovitine-treated HeLa cells bring further evidence of a minimum G1 duration (Fig. 6b), already suggested by the results in control HeLa (Fig. 4d, j) and HT29 cells (Fig. 4c, i). Moreover, this experiment provides a direct example of cells for which modulation of the growth rate in G1 as a function of volume at birth can contribute to size control.
Mathematical framework comparing size control across organisms
Our results show evidence of time modulation in G1, in agreement with recent findings32,36, and directly support the hypothesis that modulations of the growth rate might also contribute to size homeostasis24,25. To understand the respective contributions of growth and time modulation to the effective homeostatic process, we built a general mathematical framework that allowed us to perform a comparative analysis of size homeostasis mechanisms in mammalian cells and unicellular organisms. Our model (described in detail in Supplementary Note 1) assumes that cells grow exponentially, which corresponds to the average behavior we observed in our dataset, with a rate chosen stochastically from a probability distribution. This rate may depend on volume at birth (and hence contribute to size correction). Similarly, cell cycle duration may be chosen based on volume at birth and has a stochastic component. Correlations between growth rate, cell cycle duration and size at birth are accounted for to linear order, motivated by the fact that such linear correlations are able to explain most patterns in existing data (at least for bacteria49). The resulting model is able to characterize the joint correction of size by timing and growth rate modulation, with a small number of parameters.
A first parameter, λ, describes how the total relative growth (log(Vmitosis/Vbirth)) depends on volume at birth. If λ = 1, the system behaves like a sizer; if it is 0.5, it is an adder; and if it is 0, there is no size control at all (on average, cells divide once they have doubled their initial volume). This parameter can be obtained, for each dataset, by performing a linear regression on the plot of log(Vmitosis/Vbirth) vs. log(Vbirth) (Fig. 7a and Equation 5 in Supplementary Note 1). The second parameter, θ, describes how cell cycle duration depends on volume at birth. It can be obtained, for each dataset, by performing a linear regression on the plot of cell cycle duration (τ = ΔT) vs. log(Vbirth) (Fig. 7b and Equation 6 in Supplementary Note 1). If this correlation is negative (which, by choice, corresponds to a positive value of the parameter meant to describe the strength of the correction), larger cells tend to divide in shorter times, hence modulation of timing contributes to size correction. Finally, the third parameter, γ, describes the link between initial size and the variation of the growth rate with respect to its mean value. Similarly, if γ is positive, modulations of growth rate positively contribute to size control (Fig. 7c, Equation 4 in Supplementary Note 1). γ can be obtained by a linear regression when the corresponding measurements are available (in datasets from bacteria8,50,52), or estimated from the values of λ and θ for the yeast and animal cell datasets where single-cell growth rate was not available. The validity of this estimation was verified on the bacteria datasets (Supplementary Fig. 8a-b and Supplementary Note 1).
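A hedged sketch of these parameter estimates is given below (symbols follow the text; the actual fits in the paper were performed on median bins weighted by bin counts, and the exact conventions are those of Supplementary Note 1):

```python
import numpy as np

def fit_slope(x, y):
    return np.polyfit(x, y, 1)[0]

def homeostasis_parameters(v_birth, v_mitosis, tau, alpha=None):
    """Estimate (lambda, theta, gamma) from single-cell measurements."""
    logv = np.log(v_birth)
    # Eq. 5: slope of log(Vm/Vb) vs. log(Vb) is -lambda
    lam = -fit_slope(logv, np.log(v_mitosis / v_birth))
    # Eq. 6: slope of tau vs. log(Vb) is -<tau>*theta
    theta = -fit_slope(logv, tau) / np.mean(tau)
    if alpha is not None:
        # Eq. 4: slope of alpha vs. log(Vb) is -<alpha>*gamma
        gamma = -fit_slope(logv, alpha) / np.mean(alpha)
    else:
        # fall back on the balance relation, with <alpha><tau> ~ <G>
        G = np.mean(np.log(v_mitosis / v_birth))
        gamma = lam / G - theta
    return lam, theta, gamma
```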
Contribution of growth and time modulation to overall size control. a Replicative growth, log(Vmitosis/Vbirth), vs. logarithm of volume at birth, log(Vbirth), for HT29-wt cells. The slope coefficient of the linear regression gives −λ and indicates the strength of the effective size control (−λ = −0.5 ± 0.002, R2 = 0.85, n = 132, N = 3). b Cell cycle duration τ vs. initial volume log(Vbirth) for HT29-wt cells. The slope coefficient of the linear regression gives −〈τ〉θ, with 〈τ〉 the average cell cycle duration and θ the strength of control by time modulation. A positive value of θ corresponds to a positive effect on size control (−〈τ〉θ = −7 ± 0.2, R2 = 0.88, n = 163, N = 3). c Growth rate α vs. volume at birth log(Vbirth), for a dataset on bacteria from ref.51. The slope coefficient of the linear regression gives −〈α〉γ, with 〈α〉 the average growth rate and γ the control due to growth rate modulations. A positive value of γ corresponds to a positive effect on size control (−〈α〉γ = −0.0005 ± 0.0002, R2 = 0.06, n = 2107). d Left: plot of θ〈τ〉〈α〉 vs. γ〈τ〉〈α〉 for the bacteria dataset shown in Fig. 7c. Positive values along the y and x axes correspond to a positive effect on size control via time or growth modulation, respectively. Right: plot of θ multiplied by 〈G〉, the average replicative growth 〈G〉 = 〈log(Vmitosis/Vbirth)〉, vs. γ multiplied by 〈G〉 for the HT29-wt cells shown in a and b. e Comparison of datasets for bacteria (data from refs.8,49,51,52) and yeasts (data from refs.11,16), plotted as in d. Each point corresponds to a different growth condition (see Supplementary Fig. 8d). f Comparison of datasets for animal cells (our results and data from ref.30), plotted as in d. a, b, c Dots are single-cell measurements, squares with error bars are median bins with s.d., and black lines show the linear regression performed on the median bins weighted by the number of observations in each bin. d–f The dashed lines indicate the threshold above which time modulation (horizontal line) and growth modulation (vertical line) have a positive effect on size control. Values are given as slope ± standard error. See also Supplementary Figure 8
These three parameters are linked by a balance relation, which describes the fact that the overall size correction results from the combination of timing and growth rate corrections (see also Supplementary Note 1).
$$\lambda = \theta \left\langle \alpha \right\rangle \left\langle \tau \right\rangle + \gamma \left\langle \alpha \right\rangle \left\langle \tau \right\rangle$$
Each cell line and condition can be characterized by one value for each parameter and thus one point on the graph which shows γ vs. θ (Fig. 7d). Additional (less relevant here) parameters concern the intrinsic stochasticity of cell cycle duration, growth rates and net growth (see Supplementary Information). For eukaryotes where the growth rate 〈α〉 is not easily accessible, the product 〈α〉〈τ〉 was approximated by:
$$\left\langle \alpha \right\rangle \left\langle \tau \right\rangle \approx \left\langle G \right\rangle = \left\langle \log\left( V_{\mathrm{mitosis}} / V_{\mathrm{birth}} \right) \right\rangle$$
(Fig. 7d, right). The validity of this normalization was tested with bacteria (Supplementary Fig. 8c and Supplementary Note 1).
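The balance relation can be checked numerically on simulated cells obeying the model's linearized assumptions (our sketch, not the authors' simulation; parameter values are arbitrary):

```python
# Simulate exponential growers whose growth rate and cycle duration both
# depend linearly on log volume at birth, then recover lambda.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
log_vb = rng.normal(7.3, 0.2, n)          # log volume at birth
delta = log_vb - log_vb.mean()
a_mean, t_mean = 0.05, 14.0               # <alpha> (1/h) and <tau> (h)
theta, gamma = 0.3, 0.2                   # imposed correction strengths
alpha = a_mean * (1 - gamma * delta) + rng.normal(0, 0.001, n)
tau = t_mean * (1 - theta * delta) + rng.normal(0, 0.3, n)
log_ratio = alpha * tau                   # exponential growth: log(Vm/Vb)
lam = -np.polyfit(log_vb, log_ratio, 1)[0]
print(lam, (theta + gamma) * a_mean * t_mean)  # both close to 0.35
```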
Using these dimensionless parameters, it was then possible to compare datasets obtained from different cell types in different conditions and to estimate whether they displayed volume homeostasis (λ > 0), with an adder behavior (λ = 0.5) or stronger control (up to λ = 1 for a sizer). It was also possible to know whether homeostasis relied more on time modulation (θ > 0) or growth rate modulation (γ > 0).
Various couplings of growth and time modulations generate an adder
With this framework, all the datasets for both bacteria8,51,52 and yeasts11,15 mostly fell around the line λ = 0.5, indicative of a near-adder behavior (Fig. 7e and Supplementary Fig. 8d). Most mammalian cells also displayed volume homeostasis close to an adder behavior (all points except the Raji cells clustered around the line representing λ = 0.5, Fig. 7f), consistent with the plot shown in Fig. 2b. For both mammalian cells and bacteria, no dataset showed a negative time modulation, meaning that time modulation, when it is observed, always contributes to homeostasis. Compared with yeast and bacteria, positive contributions of growth rate modulations to size control were stronger (γ > 0) and observed more often in mammalian cells. Negative growth rate modulation (cells born larger growing with a faster exponential growth rate than cells born smaller), which was observed for some yeasts and bacteria, was also observed in two cases in mammalian cells (Raji cells and HT29-hgem, Fig. 7f). Our analysis method, by providing a summarized overview of a large dataset comprising various cell types and culture conditions, demonstrated the generality of the phenomenological adder (or near-adder) behavior, and also revealed the diversity of the underlying homeostatic mechanisms, with different couplings of growth rate and timing modulation. Such diversity was even observed in experiments on the same cell line depending on the growth conditions (datasets from bacteria) or initial size (results from Roscovitine-induced large HeLa cells).
The current understanding of size homeostasis in mammalian cells derives in large part from indirect evidence, due to experimental limitations. To tackle these limitations, we have developed FXm38,39, a method that tracks the volume of individual mammalian cells over long periods of time, allowing direct measurements of freely growing and dividing cells. We show that the near-adder behavior is commonly observed in a variety of cultured and primary mammalian cells, similarly to yeast and bacteria. We provide direct evidence for a contribution of both growth rate and time modulation in size control and quantify their relative contribution in a general mathematical framework. Future work deciphering the molecular mechanisms of these adaptive modulations is required.
Our results on HeLa and HT29 cells confirm previous findings32 implicating modulation of G1 duration in size control for mammalian cells, with a constraint on a minimal G1 duration above which large cells cycle in a minimal time, independent of initial size (Figs. 4c, d, i, j and 6b). In order to identify the molecular players of the G1 size-checkpoint in mammalian cells, methods such as FXm, which enable single live-cell size tracking, will be powerful tools to combine with reporters of recently identified key regulators of the G1/S transition36,53,54. S-G2 modulation was also observed in HeLa cells but not HT29 (Fig. 5a–c). This could reflect the existence of an additional size-checkpoint (similar to the 'cryptic' size-checkpoints in yeast55) observed in some cell types and not others, or observed when cells cycle very fast (like HeLa). Alternatively, it could be the sign of an over-arching control on cell size (the mechanistic adder)45, as suggested by the negative correlation between added volume in S-G2 and added volume in G1 for HeLa cells (Fig. 5f).
Our dataset also provides direct evidence in support of a role for growth rate modulations in size homeostasis24,27. In particular, experiments on Roscovitine-induced abnormally large HeLa cells showed that such cells grew on average exponentially (Fig. 6d and Supplementary Fig. 7d,e), did not adapt G1 duration to initial size (Fig. 6b), and yet maintained a size homeostasis behavior (Fig. 6c). This might be achieved through an adaptation of the exponential growth rate to the volume at birth (Fig. 6e). When considering single-cell growth trajectories, we observed that individual cells could display complex growth behaviors, with alternating plateaus and growth phases not clearly correlated with cell cycle stage events (Supplementary Fig. 7b-c). Only modulations that are size-dependent can impact cell size control, while general modulations, such as phase-dependent modulation of growth56 (Supplementary Fig. 7f), do not contribute to size homeostasis. The factors that could modulate growth rate at the single-cell level in a size-dependent manner are as yet unknown. They could involve, as recently hypothesized, limitations of protein synthesis rate in large cells57, nonlinear metabolic scaling with cell size58,59, physical constraints on volume growth via the addition of surface area60, or dynamic changes in cell/substrate adhesion, cell spreading, and cortical tension.
Our unbiased mathematical framework quantifies, for all cell types and all growth conditions, the respective contributions of growth and time modulation to the effective size homeostasis behavior (Fig. 7e, f). This analysis allowed us to compare the size homeostasis behavior of widely differing cells, and revealed global similarities but also striking differences between mammalian cells and unicellular organisms. The adder behavior has been observed in a variety of unicellular organisms, from bacteria7,8,11 to budding yeast11,45, and we showed that this behavior is also very common in cultured and primary cells (Figs. 2b and 3b). However, the apparent universality of the adder at the phenomenological level may mask a more complex picture, where several regulatory mechanisms acting in parallel or sequentially might be at play42. First, our mathematical framework shows that in bacteria, yeasts and animal cells, a variety of couplings between growth rate modulation and cell cycle duration modulation can lead to the same effective size control behavior. Second, within the group of cells we studied, growth rate modulation made a major contribution to size homeostasis in animal cells, but less so in yeast and bacteria (Fig. 7e, f). Environmentally dictated changes in growth rate are widely regarded as a central parameter for cell size homeostasis in multicellular organisms2,61,62. Thus, we surmise that the flexibility in patterns of growth may have to do with the acquisition of controlled and coordinated growth in tissues, which requires cells to respond quickly and efficiently to numerous simultaneous environmental cues. Understanding the physical parameters that drive animal cell growth in cell culture or in more complex tissue-like environments, and combining this with the well-characterized growth pathways63, is a challenging and promising research question.
All the cell culture media were supplemented with 10% FBS and 1% penicillin–streptomycin. Media (#31053044, #21041025, #11875093, #61965026), EDTA, trypsin, penicillin–streptomycin, Insulin-Transferrin-Selenium-Sodium Pyruvate (#51300044), and glutamax (#35050061) were purchased from ThermoFisher. Zeocin (#10072492) was purchased from Life Technologies and Puromycin (#BML-GR312-0050) from Enzo Life Sciences. HeLa, MDCK, and HT29 cells were cultured in DMEM-Glutamax and imaged in a medium of the same composition but without phenol red. RPE1 and primary NHDF cells were cultured and imaged using DMEM-F12. Raji cells were cultured and imaged in RPMI-1640 supplemented with Glutamax. NAF cells were cultured and imaged in DMEM, no phenol red, supplemented with Glutamax and Insulin-Transferrin-Selenium-Sodium Pyruvate. Dextran (#D-22910, #D-22914, #FD10S) and Roscovitine (#R7772-1G) were purchased from Sigma-Aldrich. The stock solution of dextran was 10 mg mL−1 in PBS; the stock solution of Roscovitine was 50 mM in DMSO.
Cell lines and plasmids
HeLa cells are human cancerous epithelial cells from an adenocarcinoma. HeLa expressing hgeminin-GFP (HeLa-hgem) were a kind gift from Buzz Baum's lab (UCL, London, United Kingdom). HeLa Kyoto expressing MyrPalm-mEGFP-H2B-mRFP (HeLa-MP) are a kind gift from Daniel Gerlich's lab (ETH, Zurich, Switzerland). HT29 cells are human cancerous cells from a colorectal adenocarcinoma. HT29 wild-type cells (HT29-wt) were HT29 HTB-38 bought from ATCC. A stable HT29 cell line expressing hgem-mCherry (HT29-hgem) was established using the lentiviral vector mCherry-hGeminin(1/60)/pCSII-EF64: electroporation was used to transfect the cells, which were then selected with zeocin 200 µg mL−1 and FACS-sorted for mCherry fluorescence. The resulting polyclonal population showed good homogeneity in fluorescence intensity. MDCK cells are dog epithelial cells from an apparently normal kidney. They are however hyperdiploid, with a modal chromosome number ranging from 77 to 80 or 87 to 90 (instead of 78 for this species). MDCK cells were obtained from Buzz Baum's lab (UCL, London, United Kingdom). Similarly to the protocol used for HT29 expressing hgem-mcherry, a stable MDCK cell line expressing MyrPalm-GFP (MDCK-MP) was established by electroporating cells with the plasmid pMyrPalm-mEGFP-IRES_puro2b provided by Daniel Gerlich's lab. Selection was made with Puromycin 2 µg mL−1 prior to FACS sorting. For all the transfected cell lines, antibiotics were removed from the culture media after FACS sorting. Raji cells are human B lymphoblastoid cells from a lymphoma. Raji cells were obtained from Claire Hivroz's lab (Institut Curie, Paris, France). RPE1 cells are human retinal pigment epithelial cells and were a kind gift from Anne Paoletti's lab (Institut Curie, Paris, France). Cell lines were tested for Mycoplasma approximately every 6 months and the tests were always negative.
Extraction and culture of primary cells
NHDFs are primary cells extracted from human abdominal skin and were bought from Biopredic. NAFs were a kind gift from Danijela Vignjevic's lab (Institut Curie, Paris, France). They are human primary fibroblasts isolated from fresh healthy intestinal tissue of patients with locally advanced rectal cancer. The sampling protocol was approved by the designated ethics committee (CPP, Comité de Protection des Personnes) and all patients gave written informed consent. The three types of NAFs used in this paper come from two different patients (for one patient, two samples (NAF-A and NAF-C) extracted at two distinct locations were taken). The protocol for sample collection and preparation is described in refs. 65,66. Briefly, samples were collected after surgical resection in DMEM medium supplemented with 1% Antibiotic-Antimycotic. Tissue was mechanically dissected into 1 mm pieces, plated on scratched 10 cm Petri dishes and cultured in DMEM supplemented with 10% FBS, 1% Insulin-Transferrin-Selenium (ITS) and 1% Antibiotic-Antimycotic at 37 °C. Medium was changed every 3 days until fibroblasts emerged from the tissue pieces. At this point, cells were trypsinized and cultured under normal conditions for up to 10 passages.
Microchannel experiments
Microchannel molds were made with classical lithography techniques and then replicated in epoxy molds. Microchannels had a 104 µm2 cross-section area (13 µm width by 8 µm height) (Supplementary Fig. 1a). They were crossed perpendicularly by two large distributing channels (5 mm width by 50 µm height). The microchannel chips were replicated in PDMS, plasma-treated, bonded to glass-bottom fluorodishes, coated with fibronectin 50 µg mL−1 and incubated overnight with the culture media. The large distributing channels were used both to inject the cells and as reservoirs of media. Cells were injected at a concentration of 3.8×106 mL−1 in the upper distributing channel; the dishes were then tilted with the distributing channel up, to deposit cells at the entry of the microchannels by gravity. The opposing distributing branch contained only media and thus diffused nutrients to the channels. This was important to guarantee a sufficient nutrient stock and good growth conditions throughout the 50 h of acquisition in this confined design. Cells were then left to migrate into the microchannels overnight and experiments were started the next morning.
Upon mitotic entry, cells round up and, because of the confinement, adopt a cylindrical shape. The contours of the cells were visualized by imaging the protein MyrPalm-GFP, which labels the cell membrane. Volume was calculated by measuring the length (ℓ) of the cell and multiplying it by the channel cross-section area (CS): V ≈ ℓ × CS (Supplementary Fig. 1a). For the analysis, mitosis was defined as the first time point where the cell rounds up and displays a cylindrical shape, and birth as the last time point after cytokinesis where the cell is still cylindrical (Fig. 2c). In the channels, cells cycled more slowly (average cell cycle duration was 24.9 h for HeLa-MP and 19.5 h for MDCK-MP; by comparison, HeLa-hgem have an average cell cycle duration of 16.2 h in a culture dish (Supplementary Fig. 1b)). They also showed indirect evidence of linear growth (large and small cells added the same amount of volume in the same amount of time, Supplementary Fig. 2a, c), a behavior reminiscent of what has been observed in another study using microchannels32. However, we checked that volume at birth and average growth speed were constant through time in the experiment (Supplementary Fig. 1g), meaning that these experiments were performed in stationary conditions and constitute a valid dataset for our study.
Volume measurement with FXm
The FXm method was initially described in ref.39 and a detailed protocol is available in ref.38. In these two previous works, we provide a number of controls showing that this method enables accurate measurement of volume independently of cell shape (e.g., cells that were measured before and after detaching from the substrate had the same volume once they became round). Volume measurement can be affected if cells take up the fluorescent probe (thus leading to an underestimation of the volume). To check that this was not the case, we plotted volume at birth through time in the experiment for all the cell types studied (Supplementary Fig. 1g and Supplementary Fig. 6a). We could confirm that, for all cell types except HeLa cells, volume at birth was steady throughout the experiment. For HeLa cells, we could see some uptake of the fluorescent probe (see Supplementary Movie 1), but the decrease in volume from the beginning to the end of the experiment (after 40 h) was below 10% and thus does not impact the analysis we perform in our work.
Except for the Raji experiments, the design of the volume measurement chamber (Fig. 1a) included two side reservoirs that diffused nutrients to cells in the middle of the chamber through microchannels. Side reservoirs were 400 µm high and diffusion to the observation part was achieved through a grid of channels (w = 100 µm, l = 300 µm, and h = 5 µm). The height of the chambers ranged from 20 to 24 µm (depending on the chamber) for HT29 and HeLa cells, was 15.5 or 18.2 µm for the Raji cells, and 18.4 µm for RPE1, NAF and NHDF cells.
A detailed protocol for the FXm experiment is available in ref.38. Briefly, the day before the experiment, chambers were replicated in PDMS (crosslinker:PDMS, 1:10). To prevent dextran leakage outside the chambers, the height of the inlets was raised by sticking 3–4 mm high PDMS cubes on top of each inlet; then a 2 mm diameter punch was made for each inlet. Chambers were then irreversibly bonded to 35 mm diameter glass-bottom fluorodishes by plasma treatment. Finally, they were coated with fibronectin 50 µg mL−1 (all cell types except RPE1) or 10 µg mL−1 (RPE1), rinsed, and then incubated overnight with the appropriate phenol-red-free media. During the acquisition, the chambers were covered with media to prevent desiccation through the PDMS and subsequent changes of the osmolarity of the media in the chamber. To prevent potential sources of variability in growth speed or doubling rate caused by different proliferative states in the population, cells were cultured in controlled conditions prior to experiments and then seeded at constant concentration two days before starting the experiment (1×105 cm−2 for HT29 cells and 1.9×104 cm−2 for HeLa). Cells were detached using trypLE (Thermofisher #12605036) (all cell types except HeLa) for 5 min or less, or EDTA (Life Technologies #15040–033) (HeLa) for 15–20 min, to avoid cell aggregates and optimize adhesion time to the glass-bottom, fibronectin-coated chamber. Cells were injected into the central part of the chamber (Fig. 1a) at a concentration ranging from 1.5 to 2×105 cells per mL, in order to obtain the appropriate density in the chambers, using a narrow 10 µL pipet tip (HT29, HeLa, RPE1) or a 2 mL syringe (NAFs, NHDF). For adherent cells (all cell types except Raji cells), 4 h after seeding, media was changed to equilibrated media containing 1 mg mL−1 of 10 kDa Dextran. Raji cells were injected together with the Dextran. The dextran used was 10 kDa Dextran coupled either to Alexa-645 (HeLa-hgem experiments), Alexa-488 (HT29-wt, HT29-hgem, Raji experiments) or FITC (RPE1, NHDF, NAF experiments). Imaging started 2–4 h after changing the media, to give time for the media to equilibrate in the chamber and avoid possible inhomogeneity of dextran just after injection.
Controls for FXm experiments on cultured cell lines
We checked that cell cycle time was similar inside and outside the measurement device (Supplementary Fig. 1b). All the cancerous adherent cell types (HeLa-hgem, HT29-wt, and HT29-hgem) showed a slightly longer cell cycle duration outside the device than inside it. For RPE1 cells, the difference in cell cycle duration was not statistically significant, although they seemed to cycle slightly faster outside than inside the FXm device. Suspended Raji cells, on the contrary, showed a slightly longer cell cycle duration inside the FXm device. There can be multiple reasons for this (e.g., a higher concentration of proliferative signals secreted by the cells in the FXm chamber than in the large volume of the culture dish; better access to oxygen in microfluidic devices, made of a gas-permeable elastomer, PDMS, than at the bottom of a Petri dish, where cells can easily find themselves in hypoxic conditions). Overall, the difference in average cell cycle duration outside or inside the device was significant but small, and this control shows that cells cycle with normal timing in the FXm device.
Controls for FXm experiments on primary cells
Primary cells, which are samples coming directly from human patients and are known to progressively stop dividing in culture, are overall more heterogeneous in culture than immortal cell lines. The comparison of the coefficients of variation of cell cycle duration or volume at birth of our four datasets on primary cells with those of four immortal cell lines (RPE1, HT29-wt, Raji, and L1210 from ref.30) illustrates the higher variability observed in primary cell populations (Supplementary Fig. 3a-b). Moreover, the change of culture environment, from the cell culture dish to the FXm chamber, caused a change in the way cells grew, with a low overall replicative growth: the ratio 〈Vmitosis/Vbirth〉 was about 1.5–1.6 (Fig. 3a). This is lower than the values we report for immortal cell lines (Fig. 3a, gray area, Supplementary Fig. 2d).
To check that this decrease in volume was not due to an uptake of the fluorescent probe we use for the FXm method, we compared the images and results with those of HeLa-hgem cells. For HeLa-hgem cells, the uptake of dextran was visible by eye, producing clear dots in the cells (Supplementary Fig. 3d), and led to a decrease in volume throughout the experiment that was below 10% (Supplementary Fig. 1g) and a ratio 〈Vmitosis/Vbirth〉 equal to 1.8 (Supplementary Fig. 2d). In primary cells, we could not identify by eye any uptake of the fluorescent probe (Supplementary Fig. 3e), yet the ratio 〈Vmitosis/Vbirth〉 was lower than that of HeLa (from 1.5 to 1.6 depending on the experiments). We also checked that the average growth speed was constant throughout the experiments (Supplementary Fig. 3f), indicating that the decrease in size was not caused by a progressive decrease of average growth speed during the course of the experiment (induced, for example, by repeated illumination, a progressive depletion of nutrients, or any other time-dependent parameter of the set-up). Unfortunately, it is not possible with our device to perform longer experiments to check whether primary cells reached a new steady state of size after a few generations.
Altogether, these controls allow us to eliminate a number of potential experimental biases that could have explained the decrease in size in these primary cells. Because of these differences, we did not include primary cells in our final mathematical framework in Fig. 7. However, the separate analysis of their homeostatic behavior reveals that they display an adder (NAFs) or near-adder (NHDF) behavior (Fig. 3b) that involves very little modulation of cell cycle timing (Supplementary Fig. 3g-j).
Choice of key time points for the FXm analysis
For the analysis of the relationships between volume at birth, volume at mitosis and volume at G1/S and the duration or volume gained between two of these time points, cells were manually tracked. During mitosis, an abrupt and reversible increase of volume has been described previously by us and others39,67 (Supplementary Fig. 1d). To make sure that we measured volume at mitosis and volume at birth outside of this mitotic volume overshoot, we defined mitosis as the point occurring 60 min prior to cytokinesis and birth as the point occurring 40 min after cytokinesis (Fig. 1b and Supplementary Fig. 1d). To check that volume at mitosis was measured before the mitotic volume overshoot, we compared the volumes measured 100 min and 60 min before cytokinesis and verified that they were not significantly different (pairwise t test comparing the means: p = 0.800) (Supplementary Fig. 1e). On average, the volume of the mother cell 60 min before cytokinesis is slightly higher than the sum of the volumes of the two daughter cells at birth (mean: m = 3200 µm3 and m = 2900 µm3, respectively). The potential overestimation of volume at mitosis however remains below 10% of the average volume at mitosis and thus will not affect the correlations studied. For the measurement of volume at birth 40 min after cytokinesis, we checked that segmentation of daughter cells close to each other did not introduce mistakes in the volume measurement. To do so, we compared the sum of the volumes of the two daughter cells measured separately with the value obtained when measuring the two cells at once (Supplementary Fig. 1e). These two measurements were not significantly different (pairwise t test comparing the means: p = 0.826).
G1/S was identified as the first time point where hgeminin-GFP (HeLa) or hgeminin-mcherry (HT29) was observed (Fig. 4a). This point was visually assessed by looking at the movies, and we checked that this method was consistent with an assessment based on the fluorescence expression profile (Supplementary Fig. 4a). We compared 10 curves and show here the most imprecise evaluation (Supplementary Fig. 4a, left), the average type of error observed (Supplementary Fig. 4a, middle) and the best evaluation (Supplementary Fig. 4a, right). This empirical check shows that on average the error was very small.
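An automated equivalent of this visual criterion could look like the following sketch (threshold and window choices are ours, purely illustrative; the paper relied on visual assessment):

```python
# Detect G1/S as the first time point where the hgeminin signal rises
# durably above the pre-S baseline.
import numpy as np

def detect_g1s(time_h, intensity, n_baseline=6, k=3.0):
    base = intensity[:n_baseline]                 # assume early frames are G1
    threshold = base.mean() + k * base.std()
    above = intensity > threshold
    for i in range(len(above) - 2):
        if above[i] and above[i + 1] and above[i + 2]:  # sustained rise
            return time_h[i]
    return None  # no G1/S transition detected in this track
```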
Roscovitine experiment
For the Roscovitine experiments, HeLa cells were seeded in six-well plates at 1.9×104 (control) and 8.3×104 (treated) cells per cm2, 52 h before the experiment. Four hours later (48 h before the experiment), when cells were spread, the media was changed to 2 mL of media with (treated) or without (control) 20 µM Roscovitine. The Roscovitine stock solution was 50 mM in DMSO.
Phenol red-free media was used for FXm experiments. Acquisitions were performed on a Ti inverted microscope (Nikon), an Axio Observer microscope (Carl Zeiss) or a DMi8 inverted microscope (Leica), at 37 °C with a 5% CO2 atmosphere, with a ×10 dry objective (NA 0.30 phase) for FXm experiments or a ×20 dry objective (NA 0.45 phase) for microchannel experiments. Images were acquired using MetaMorph (Molecular Devices) or Axio Vision (Carl Zeiss) software. The excitation source was systematically a LED for FXm experiments to obtain the best possible homogeneity of field illumination (Lumencor or Zeiss Colibri), or a mercury arc lamp for some of the microchannel experiments. Images were acquired with a CoolSnap HQ2 camera (Photometrics) or an ORCA-Flash4.0 camera (Hamamatsu).
For time-lapse experiments, images were acquired every 5 min (microchannel experiments), 10 min (FXm measurements: fluorescence-exclusion channel and phase channel for HeLa, HT29, and Raji cells), 15 min (FXm measurements: fluorescence-exclusion channel and phase channel for RPE1, NAF, and NHDF cells) and 30 min (fluorescent geminin channel), for up to 50 h, in order to obtain 1–2 full cell cycles per lineage. One of the crucial parameters to preserve good cycling of the cells in the FXm chambers throughout the 50 h of the experiment is to reduce the power of the fluorescence lamp as much as possible. A useful landmark to adapt the parameters on different microscopes was to set the power of the lamp in order to obtain around 2^13 gray levels with excitation times of about 300–400 ms.
For FXm experiments, image analysis was performed using a home-made Matlab program described in ref.39. The growth curves were analyzed with an updated version of this program, written in collaboration with the company QuantaCell38. Briefly, the fluorescent signal was calibrated for every time point using the fluorescence intensity of the pillars and of the area around the cell of interest, to obtain the linear relationship between height and fluorescence. After background cleaning, the fluorescence intensity was integrated over the whole cell and its surroundings to obtain the cell volume.
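The core of the FXm computation can be sketched as follows (a conceptual Python rendering; the published Matlab pipeline additionally handles per-frame calibration drift, background cleaning and segmentation):

```python
# Fluorescence exclusion: intensity is linear in the local height of
# fluorescent medium, calibrated between pillars (height 0) and
# background (full chamber height). The cell volume is the integral of
# the excluded height over the cell mask.
import numpy as np

def fxm_volume(img, mask, i_bg, i_pillar, chamber_height_um, px_area_um2):
    # local height of fluorescent medium above each pixel
    h_medium = chamber_height_um * (img - i_pillar) / (i_bg - i_pillar)
    excluded = chamber_height_um - h_medium   # height occupied by the cell
    return float(np.sum(excluded[mask]) * px_area_um2)
```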
For the microchannels experiments, image analysis was performed on ImageJ.
Data filtering and analysis
For all the data on animal cells (ours and those from ref.30), only clear outliers that were higher or lower than the mean ± 3×s.d. (standard deviation) were removed. This corresponded on average to at most 0–5 points per dataset (each dataset having n > 87). These outliers were removed for visual purposes (scale of the plot adapted to the range of the data) and for analytical robustness.
For the bacteria and yeast data obtained from previous studies8,11,15,51,52, a filter based on the IQR (interquartile range) was applied: cells for which log(Vbirth) or log(Vmitosis) fell outside the median ± 1.5×IQR of log(Vbirth) and log(Vmitosis), respectively, were removed.
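In pseudocode terms, this filter amounts to (a sketch of our reading of the procedure):

```python
# IQR-based filter: keep cells whose log volumes lie within
# median +/- 1.5*IQR, applied to birth and mitosis volumes separately.
import numpy as np

def iqr_keep(v):
    logv = np.log(v)
    q1, q3 = np.percentile(logv, [25, 75])
    med, iqr = np.median(logv), q3 - q1
    return (logv > med - 1.5 * iqr) & (logv < med + 1.5 * iqr)

# keep = iqr_keep(v_birth) & iqr_keep(v_mitosis)
```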
The growth curves were obtained from automated tracking of the movies and analyzed as follows. First, all the tracks were visualized to identify the phases of the cell cycle (birth was automatically detected because the tracks split when the newborn cells separated; the mitotic volume overshoot indicating the end of the cell cycle was visually assessed from the volume growth curve (see Fig. 1c); and the G1/S transition was visually assessed as the transition point in the nuclear hgeminin fluorescence expression curve, as shown in Supplementary Fig. 4a). Both complete cell cycle trajectories and incomplete trajectories that were longer than 5 h and contained at least one identified cell cycle event (birth, G1/S or mitosis) were kept. Second, clear outliers caused by segmentation errors were removed using a sliding filter that removed a point if it was too far from the median of the local distribution of measures (on a window of 11 frames). This filter was good enough to remove only clear outliers (Supplementary Fig. 7a, left). Third, for the instantaneous growth speed measurement, the volume curves were smoothed by performing a sliding average on windows of 7 frames (70 min). The growth speed at each point was then the slope of a robust linear fit performed on a window of 9 frames centered on that point (Supplementary Fig. 7a, middle).
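The three processing steps translate into a short pipeline; the sketch below uses the window sizes quoted in the text but an ordinary least-squares slope where the original used a robust fit, and a median-absolute-deviation threshold of our choosing for the outlier filter:

```python
import numpy as np

def remove_outliers(v, window=11, k=3.0):
    """Drop points far from the local sliding median (MAD criterion)."""
    half, keep = window // 2, np.ones(len(v), dtype=bool)
    for i in range(len(v)):
        local = v[max(0, i - half): i + half + 1]
        med = np.median(local)
        mad = np.median(np.abs(local - med)) + 1e-9
        keep[i] = abs(v[i] - med) < k * 1.4826 * mad
    return keep

def sliding_mean(v, window=7):
    """Smooth the volume curve with a 7-frame (70 min) sliding average."""
    return np.convolve(v, np.ones(window) / window, mode="valid")

def growth_speed(t, v, window=9):
    """Slope of a linear fit on 9-frame windows centered on each point."""
    half, speed = window // 2, np.full(len(v), np.nan)
    for i in range(half, len(v) - half):
        sl = slice(i - half, i + half + 1)
        speed[i] = np.polyfit(t[sl], v[sl], 1)[0]  # dv/dt
    return speed
```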
All the figures and statistical analyses were performed in R. Packages used were: "robust", "robustbase", "ggplot2", "grid", "gridExtra", "xtable", "stringr", "RColorBrewer".
For the boxplots, the upper and lower hinges correspond to the first and third quartiles, and the upper and lower whiskers extend from the hinge to the highest (lowest) value within 1.5×IQR (interquartile range) of the hinge. Data beyond the whiskers are shown as outliers.
For the plots where a linear relationship was tested, a linear fit on the median bins, weighted by the number of observations in each bin, was performed. The result of this fit is always indicated with the slope coefficient (a) ± its standard error, the p-value of the slope coefficient (p) and the coefficient of determination (R2). For all the plots except the ones analyzing growth speed as a function of time or size (Fig. 6d, e and Supplementary Fig. 7d-f), the bins are median bins along the x axis of the plot, and the bars represent the standard deviation. Equally spaced bins were defined along the x axis and bins that contained fewer than a minimum number of single-cell events were removed. The bin number (binn) and the minimum number of events per bin (minn) were adapted to the size of the datasets as follows. For animal cells, the size of the datasets ranged from 80 to 300 observations, with 6 ≤ binn ≤ 8 and 2 ≤ minn ≤ 8. For bacteria or yeast datasets, binn and minn depended on the number n of observations in the dataset: n < 100, binn = 8, minn = 8; 100 ≤ n < 1000, binn = 10, minn = 15; 1000 ≤ n < 5000, binn = 13, minn = 60; n > 5000, binn = 15, minn = 150. For the plots testing the relationship between growth speed and volume or time (Fig. 6d, e and Supplementary Fig. 7d-f), the bins are average bins, and bins that contained measurements from fewer than five different cells were removed to avoid low-sampling effects.
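A sketch of this binned, weighted regression (our Python rendering of the R procedure; bin counts are passed directly as polyfit weights):

```python
import numpy as np

def weighted_bin_fit(x, y, n_bins=8, min_n=5):
    """Median bins along x; linear fit weighted by bin occupancy."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    xb, yb, wb = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x < hi)
        if sel.sum() >= min_n:
            xb.append(np.median(x[sel]))
            yb.append(np.median(y[sel]))
            wb.append(sel.sum())
    slope, intercept = np.polyfit(xb, yb, 1, w=np.asarray(wb, float))
    return slope, intercept
```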
The authors declare that all data supporting the findings of this study are available within the article and its Supplementary Information files, or from the corresponding author upon reasonable request. The custom Matlab software developed for volume measurement is available upon request.
This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication.
Ginzberg, M. B., Kafri, R. & Kirschner, M. On being the right (cell) size. Science 348, 1245075 (2015).
Lloyd, A. C. The regulation of cell size. Cell 154, 1194–205 (2013).
Fantes, P. A. Control of cell size and cycle time in Schizosaccharomyces pombe. J. Cell. Sci. 24, 51–67 (1977).
Pan, K. Z., Saunders, T. E., Flor-Parra, I., Howard, M. & Chang, F. Cortical regulation of cell size by a sizer cdr2p. eLife 2014, e02040 (2014).
Amir, A. Cell Size Regulation in Bacteria. Phys. Rev. Lett. 112, 208102 (2014).
Voorn, W. J. & Koppes, L. J. H. Skew or third moment of bacterial generation times. Arch. Microbiol. 169, 43–51 (1998).
Campos, M. et al. A Constant Size Extension Drives Bacterial Cell Size Homeostasis. Cell 159, 1433–1446 (2014).
Taheri-Araghi, S. et al. Cell-Size Control and Homeostasis in Bacteria. Curr. Biol. 25, 385–391 (2015).
Yu, F. B. et al. Long-term microfluidic tracking of coccoid cyanobacterial cells reveals robust control of division timing. BMC Biol. 15, 11 (2017).
Deforet, M., Van Ditmarsch, D. & Xavier, J. Cell-Size Homeostasis and the Incremental Rule in a Bacterial Pathogen. Biophys. J. 109, 521–528 (2015).
Soifer, I., Robert, L. & Amir, A. Single-cell analysis of growth in budding yeast and bacteria reveals a common size regulation strategy. Curr. Biol. 26, 356–361 (2016).
Conlon, I. & Raff, M. Differences in the way a mammalian cell and yeast cells coordinate cell growth and cell-cycle progression. J. Biol. 2, 7 (2003).
Osella, M., Nugent, E. & Cosentino Lagomarsino, M. Concerted control of Escherichia coli cell division. Proc. Natl. Acad. Sci. USA. 111, 3431–5 (2014).
Iyer-Biswas, S. et al. Scaling laws governing stochastic growth and division of single bacterial cells. Proc. Natl. Acad. Sci. USA. 111, 15912–15917 (2014).
Nobs, J.-B. & Maerkl, S. J. Long-term single cell analysis of S. pombe on a microfluidic microchemostat array. PLoS One 9, e93466 (2014).
Adiciptaningrum, A. et al. Stochasticity and homeostasis in the E. coli replication and division cycle. Sci. Rep. 5, 18261 (2015).
Jorgensen, P. & Tyers, M. How cells coordinate growth and division. Curr. Biol. 14, R1014–27 (2004).
Schmoller, K. M. The phenomenology of cell size control. Curr. Opin. Cell Biol. 49, 53–58 (2017).
Ho, P. & Amir, A. Simultaneous Regulation of Cell Size and Chromosome Replication in Bacteria. Front. Microbiol. 6, 1–10 (2015).
Wang, P. et al. Robust growth of Escherichia coli. Curr. Biol. 20, 1099–1103 (2010).
Dolznig, H., Grebien, F., Sauer, T., Beug, H. & Müllner, E. W. Evidence for a size-sensing mechanism in animal cells. Nat. Cell Biol. 6, 899–905 (2004).
Echave, P., Conlon, I. J. & Lloyd, A. C. Cell Size Regulation in Mammalian Cells. Cell Cycle 6, 218–224 (2007).
Killander, D. & Zetterberg, A. A quantitative cytochemical investigation of the relationship between cell mass and initiation of DNA synthesis in mouse fibroblasts in vitro. Exp. Cell Res. 40, 12–20 (1965).
Kafri, R. et al. Dynamics extracted from fixed cells reveal feedback linking cell growth to cell cycle. Nature 494, 480–483 (2013).
Ginzberg, M. B. et al. Cell size sensing in animal cells coordinates anabolic growth rates and cell cycle progression to maintain cell size uniformity. eLife 7, e26957 (2018).
Sung, Y. et al. Size homeostasis in adherent cells studied by synthetic phase microscopy. Proc. Natl. Acad. Sci. USA. 110, 16687–92 (2013).
Tzur, A., Kafri, R., Lebleu, V. S., Lahav, G. & Kirschner, M. W. Cell growth and size homeostasis in proliferating animal cells. Science 325, 167–71 (2009).
Park, K. et al. Measurement of adherent cell mass and growth. Proc. Natl. Acad. Sci. USA. 107, 20691–6 (2010).
Mir, M. et al. Optical measurement of cycle-dependent cell growth. Proc. Natl. Acad. Sci. USA. 108, 13124–9 (2011).
Son, S. et al. Direct observation of mammalian cell growth and size regulation. Nat. Methods 9, 910–2 (2012).
Grover, W. H. et al. Measuring single-cell density. Proc. Natl. Acad. Sci. USA. 108, 10992–6 (2011).
Varsano, G. et al. Probing mammalian cell size homeostasis by channel-assisted cell reshaping. Cell Rep. 20, 397–410 (2017).
Schmoller, K., Turner, J. J., Kõivomägi, M. & Skotheim, J. M. Dilution of the cell cycle inhibitor Whi5 controls budding yeast cell size. Nature 526, 268–272 (2015).
Sompayrac, L. & Maaloe, O. Autorepressor Model for Control of DNA Replication. Nat. New Biol. 241, 133–5 (1973).
Zielke, N. et al. Control of Drosophila endocycles by E2F and CRL4CDT2. Nature 480, 123–127 (2011).
Liu, S. et al. Size uniformity of animal cells is actively maintained by a p38 MAPK-dependent regulation of G1-length. eLife 7, e26947 (2018).
Cadart, C., Zlotek-Zlotkiewicz, E., Le Berre, M., Piel, M. & Matthews, H. K. Exploring the function of cell shape and size during mitosis. Dev. Cell. 29, 159–169 (2014).
Cadart, C. et al. Fluorescence eXclusion Measurement of volume in live cells. Methods Cell Biol. 139, 103–120 (2017).
Zlotek-Zlotkiewicz, E., Monnier, S., Cappello, G., Le Berre, M. & Piel, M. Optical volume and mass measurements show that mammalian cells swell during mitosis. J. Cell. Biol. 211, 765–774 (2015).
Model, M. A. Methods for cell volume measurement. Cytom. Part A 93, (2017).
Lancaster, O. M. et al. Mitotic rounding alters cell geometry to ensure efficient bipolar spindle formation. Dev. Cell. 25, 270–83 (2013).
Osella, M., Tans, S. J. & Cosentino Lagomarsino, M. Step by Step, Cell by Cell: Quantification of the Bacterial Cell Cycle. Trends Microbiol. 25, 250–256 (2017).
Hartwell, L. & Unger, M. Unequal division in Saccharomyces cerevisiae and its implications for the control of cell division. J. Cell. Biol. 75, 422–435 (1977).
Johnston, G. C., Pringle, J. R. & Hartwell, L. H. Coordination of growth with cell division in the yeast Saccharomyces cerevisiae. Exp. Cell Res. 105, 79–98 (1977).
Chandler-Brown, D., Schmoller, K. M., Winetraub, Y. & Skotheim, J. M. The Adder Phenomenon Emerges from Independent Control of Pre- and Post- Start Phases of the Budding Yeast Cell Cycle. Curr. Biol. 27, 2774–2783.e3 (2017).
Di Talia, S., Skotheim, J. M., Bean, J. M., Siggia, E. D. & Cross, F. R. The effects of molecular noise and size control on variability in the budding yeast cell cycle. Nature 448, 947–951 (2007).
Sakaue-Sawano, A. et al. Visualizing Spatiotemporal Dynamics of Multicellular Cell-Cycle Progression. Cell 132, 487–498 (2008).
Meijer, L. & Raymond, E. Roscovitine and other purines as kinase inhibitors. From starfish oocytes to clinical trials. Acc. Chem. Res. 36, 417–425 (2003).
Grilli, J., Osella, M., Kennard, A. S. & Lagomarsino, M. C. Relevant parameters in models of cell division control. Phys. Rev. E 95, 032411 (2017).
Wallden, M., Fange, D., Gregorsson Lundius, E., Baltekin, Ö. & Elf, J. The synchronization of replication and division cycles in individual E. coli cells. Cell 166, 729–739 (2016).
Kennard, A. S. et al. Individuality and universality in the growth-division laws of single E. coli cells. Phys. Rev. E 93, 1–18 (2016).
Kiviet, D. J. et al. Stochasticity of metabolism and growth at the single-cell level. Nature 514, 376–379 (2014).
Cappell, S. D., Chung, M., Jaimovich, A., Spencer, S. L. & Meyer, T. Irreversible APCCdh1 Inactivation Underlies the Point of No Return for Cell-Cycle Entry. Cell 166, 167–180 (2016).
Barr, A. R., Heldt, F. S., Zhang, T., Bakal, C. & Novák, B. A Dynamical Framework for the All-or-None G1/S Transition. Cell Syst. 2, 27–37 (2016).
Turner, J. J., Ewald, J. C. & Skotheim, J. M. Cell size control in yeast. Curr. Biol. 22, (2012).
Goranov, A. I. & Amon, A. Growth and division-not a one-way road. Curr. Opin. Cell Biol. 22, 795–800 (2010).
Kafri, M., Metzl-Raz, E., Jonas, F. & Barkai, N. Rethinking cell growth models. FEMS. Yeast. Res. 16, 1–13 (2016).
Miettinen, T. P. & Bjorklund, M. Cellular Allometry of Mitochondrial Functionality Establishes the Optimal Cell Size. Dev. Cell. 39, 370–382 (2016).
Miettinen, T. P. & Björklund, M. Mitochondrial Function and Cell Size: An Allometric Relationship. Trends. Cell Biol. 27, 393–402 (2017).
Glazier, D. Metabolic Scaling in Complex Living Systems. Systems 2, 451–540 (2014).
Roberts, S. A. & Lloyd, A. C. Aspects of cell growth control illustrated by the Schwann cell. Curr. Opin. Cell Biol. 24, 852–857 (2012).
Grewal, S. S. & Edgar, B. A. Controlling cell division in yeast and animals: does size matter? J. Biol. 2, 5 (2003).
Laplante, M. & Sabatini, D. M. MTOR signaling in growth control and disease. Cell 149, 274–293 (2012).
Sakaue-Sawano, A. et al. Visualizing developmentally programmed endoreplication in mammals using ubiquitin oscillators. Development 140, 4624–4632 (2013).
Glentis, A. et al. Cancer-associated fibroblasts induce metalloprotease-independent cancer cell invasion of the basement membrane. Nat. Commun. 8, 1–13 (2017).
Castelló-Cros, R. & Cukierman, E. in Extracellular Matrix Protocols, (eds Streuli, C. & Grant, M.) 275–305 (Humana Press, 2009).
Son, S. et al. Resonant microchannel volume and mass measurements show that suspended cells swell during mitosis. J. Cell. Biol. 211, 757–763 (2015).
We thank Jorge Barbazan for helping with the experiments on NAFs, the Gerlich lab for sharing the HeLa-MP cell line, Helen K. Matthews, Nunu Mchedlichvili, and Ewa Zlotek-Zlotkiewicz, members of the Piel lab and the Perez lab for scientific and technical advice, Camille Blakeley and Charlotte Pirot for preliminary work as undergraduate students, Tom Wyatt, Youmna Attieh, and Giuliana Victoria for help in revising the manuscript, Isabel Brito for advice on the statistical analysis, Laurence Bataille for help with the Raji cells, Lucie Sengmanivong and the imaging platform from the Institut Curie PICT-IBiSA, the UMR 168 clean room facility and the IPGG platform. We also acknowledge Jan Skotheim for critical reading of the manuscript, and Sungmin Son, Scott Manalis, Ariel Amir and Sander Tans for sharing and discussing data on yeast and bacteria. C.C. acknowledges support from the Fondation pour la Recherche Médicale (FDT20160435078) and the Ligue Nationale contre le Cancer for funding. M.C.-L. acknowledges support from the International Human Frontier Science Program Organization, grant RGY0070/2014. B.B. acknowledges a Cancer Research UK program grant for support: C1529/A17343. This work was supported by a LABEX IPGG grant to R.A., by an ERC consolidator grant (311205 PROMICO) to M.P., by an ANR grant to M.P. (ANR-14-CE11-0009-03, CellSize), and by the Institut Pierre Gilles de Gennes (Equipement d'Excellence, "Investissements d'Avenir", program ANR-10-EQPX-34).
Emmanuel Terriac
Present address: INM-Leibniz Institute for New Materials, Campus D2 2, 66123, Saarbrücken, Germany
Institut Curie, PSL Research University, CNRS, UMR 144, F-75005, Paris, France
Clotilde Cadart, Sylvain Monnier, Pablo J. Sáez, Nishit Srivastava, Rafaele Attia, Emmanuel Terriac & Matthieu Piel
Institut Pierre-Gilles de Gennes, PSL Research University, F-75005, Paris, France
Clotilde Cadart, Pablo J. Sáez, Nishit Srivastava, Rafaele Attia & Matthieu Piel
Univ. Lyon, Université Claude Bernard Lyon 1, CNRS, Institut Lumière Matière, F-69622, Villeurbanne, France
Sylvain Monnier
Department of Ecology and Evolution, University of Chicago, 1101 E 57th Street, Chicago, IL, 60637, USA
Jacopo Grilli
Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM, 87501, USA
MRC Laboratory for Molecular Cell Biology, UCL, London, WC1E 6BT, UK
Buzz Baum
Institute of Physics of Living Systems, UCL, London, WC1E 6BT, UK
Sorbonne Universités, Université Pierre et Marie Curie, Paris, F-75005, France
Marco Cosentino-Lagomarsino
CNRS, UMR 7238 Computational and Quantitative Biology, Paris, F-75005, France
FIRC Institute of Molecular Oncology (IFOM), Milan, 20139, Italy
C.C. and S.M. conducted the experiments; S.M. optimized and designed the FXm chambers; N.S. and P.S. performed some of the data analysis; E.T. and R.A. designed, produced, and characterized the molds for the chambers; M.C.-L. and J.G. developed the theoretical framework; B.B. helped conceive the project, helped supervise its early stages, and helped with the text; C.C. conducted the analysis; M.C.-L. helped with the data analysis; C.C. and M.C.-L. helped with the manuscript preparation; C.C. and M.P. designed the experiments; M.P. wrote the paper and supervised the work.
Correspondence to Marco Cosentino-Lagomarsino or Matthieu Piel.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Peer Review File
Description of Additional Supplementary Files
Supplementary Movie 1
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Cadart, C., Monnier, S., Grilli, J. et al. Size control in mammalian cells involves modulation of both growth rate and cell cycle duration. Nat Commun 9, 3275 (2018). https://doi.org/10.1038/s41467-018-05393-0
Hydrothermal 15N15N abundances constrain the origins of mantle nitrogen
J. Labidi, P. H. Barry, D. V. Bekaert, M. W. Broadley, B. Marty, T. Giunta, O. Warr, B. Sherwood Lollar, T. P. Fischer, G. Avice, A. Caracausi, C. J. Ballentine, S. A. Halldórsson, A. Stefánsson, M. D. Kurz, I. E. Kohl & E. D. Young
Nitrogen is the main constituent of the Earth's atmosphere, but its provenance in the Earth's mantle remains uncertain. The relative contribution of primordial nitrogen inherited during the Earth's accretion versus that subducted from the Earth's surface is unclear1,2,3,4,5,6. Here we show that the mantle may have retained remnants of such primordial nitrogen. We use the rare 15N15N isotopologue of N2 as a new tracer of air contamination in volcanic gas effusions. By constraining air contamination in gases from Iceland, Eifel (Germany) and Yellowstone (USA), we derive estimates of mantle δ15N (the fractional difference in 15N/14N from air), N2/36Ar and N2/3He. Our results show that negative δ15N values observed in gases, previously regarded as indicating a mantle origin for nitrogen7,8,9,10, in fact represent dominantly air-derived N2 that experienced 15N/14N fractionation in hydrothermal systems. Using two-component mixing models to correct for this effect, the 15N15N data allow extrapolations that characterize mantle endmember δ15N, N2/36Ar and N2/3He values. We show that the Eifel region has slightly increased δ15N and N2/36Ar values relative to estimates for the convective mantle provided by mid-ocean-ridge basalts11, consistent with subducted nitrogen being added to the mantle source. In contrast, we find that whereas the Yellowstone plume has δ15N values substantially greater than that of the convective mantle, resembling surface components12,13,14,15, its N2/36Ar and N2/3He ratios are indistinguishable from those of the convective mantle. This observation raises the possibility that the plume hosts a primordial component. We provide a test of the subduction hypothesis with a two-box model, describing the evolution of mantle and surface nitrogen through geological time. We show that the effect of subduction on the deep nitrogen cycle may be less important than has been suggested by previous investigations. We propose instead that high mid-ocean-ridge basalt and plume δ15N values may both be dominantly primordial features.
Fig. 1: The nitrogen isotopic composition of volcanic gases and volatile-rich MORBs.
Fig. 2: The relationship between Δ30 and N2/3He ratios in volcanic gases.
Fig. 3: The relationship between Δ30 and argon isotopes in volcanic gases.
Fig. 4: The evolution of δ15N and nitrogen abundances in the convective mantle and at the Earth's surface as a function of time.
Nitrogen isotopologue and noble gas data are archived on EarthChem at https://doi.org/10.1594/IEDA/111481. Source data for Figs. 1–3 are provided with the paper.
Javoy, M. The birth of the Earth's atmosphere: the behaviour and fate of its major elements. Chem. Geol. 147, 11–25 (1998).
Dauphas, N. & Marty, B. Heavy nitrogen in carbonatites of the Kola Peninsula: a possible signature of the deep mantle. Science 286, 2488–2490 (1999).
Marty, B. & Zimmermann, L. Volatiles (He, C, N, Ar) in mid-ocean ridge basalts: assessment of shallow-level fractionation and characterization of source composition. Geochim. Cosmochim. Acta 63, 3619–3633 (1999).
Marty, B. & Dauphas, N. The nitrogen record of crust–mantle interaction and mantle convection from Archean to Present. Earth Planet. Sci. Lett. 206, 397–410 (2003).
Palot, M., Cartigny, P., Harris, J. W., Kaminsky, F. V. & Stachel, T. Evidence for deep mantle convection and primordial heterogeneity from nitrogen and carbon stable isotopes in diamond. Earth Planet. Sci. Lett. 357–358, 179–193 (2012).
Barry, P. H. & Hilton, D. R. Release of subducted sedimentary nitrogen throughout Earth's mantle. Geochem. Perspect. Lett. 2, 148–159 (2016).
Marty, B. et al. Gas geochemistry of geothermal fluids, the Hengill area, southwest rift zone of Iceland. Chem. Geol. 91, 207–225 (1991).
Fischer, T. P. et al. Subduction and recycling of nitrogen along the Central American margin. Science 297, 1154–1157 (2002).
Fischer, T. et al. Upper-mantle volatile chemistry at Oldoinyo Lengai volcano and the origin of carbonatites. Nature 459, 77–80 (2009).
Bräuer, K., Kämpf, H., Niedermann, S. & Strauch, G. Indications for the existence of different magmatic reservoirs beneath the Eifel area (Germany): a multi-isotope (C, N, He, Ne, Ar) approach. Chem. Geol. 356, 193–208 (2013).
Javoy, M. & Pineau, F. The volatiles record of a "popping" rock from the Mid-Atlantic Ridge at 14°N: chemical and isotopic composition of gas trapped in the vesicles. Earth Planet. Sci. Lett. 107, 598–611 (1991).
Bebout, G. E. & Fogel, M. L. Nitrogen-isotope compositions of metasedimentary rocks in the Catalina Schist, California: implications for metamorphic devolatilization history. Geochim. Cosmochim. Acta 56, 2839–2849 (1992).
Busigny, V., Cartigny, P. & Philippot, P. Nitrogen isotopes in ophiolitic metagabbros: a re-evaluation of modern nitrogen fluxes in subduction zones and implication for the early Earth atmosphere. Geochim. Cosmochim. Acta 75, 7502–7521 (2011).
Busigny, V., Cartigny, P., Philippot, P., Ader, M. & Javoy, M. Massive recycling of nitrogen and other fluid-mobile elements (K, Rb, Cs, H) in a cold slab environment: evidence from HP to UHP oceanic metasediments of the Schistes Lustrés nappe (western Alps, Europe). Earth Planet. Sci. Lett. 215, 27–42 (2003).
Bebout, G. E., Agard, P., Kobayashi, K., Moriguti, T. & Nakamura, E. Devolatilization history and trace element mobility in deeply subducted sedimentary rocks: evidence from Western Alps HP/UHP suites. Chem. Geol. 342, 1–20 (2013).
Grady, M. & Wright, I. Elemental and isotopic abundances of carbon and nitrogen in meteorites. Space Sci. Rev. 106, 231–248 (2003).
Abernethy, F. A. J. et al. Stable isotope analysis of carbon and nitrogen in angrites. Meteorit. Planet. Sci. 48, 1590–1606 (2013).
Cartigny, P., Palot, M., Thomassot, E. & Harris, J. W. Diamond formation: a stable isotope perspective. Annu. Rev. Earth Planet. Sci. 42, 699–732 (2014).
Pearson, V. K., Sephton, M. A., Franchi, I. A., Gibson, J. M. & Gilmour, I. Carbon and nitrogen in carbonaceous chondrites: elemental abundances and stable isotopic compositions. Meteorit. Planet. Sci. 41, 1899–1918 (2006).
Young, E. D. et al. Near-equilibrium isotope fractionation during planetesimal evaporation. Icarus 323, 1–15 (2019).
Li, Y., Marty, B., Shcheka, S., Zimmermann, L. & Keppler, H. Nitrogen isotope fractionation during terrestrial core-mantle separation. Geochem. Perspect. Lett. 2, 138–147 (2016).
Dalou, C. et al. Redox control on nitrogen isotope fractionation during planetary core formation. Proc. Natl Acad. Sci. USA 116, 14485–14494 (2019).
Allègre, C. J. & Turcotte, D. L. Implications of a two-component marble-cake mantle. Nature 323, 123–127 (1986).
Thomazo, C. & Papineau, D. Biogeochemical cycling of nitrogen on the early Earth. Elements 9, 345–351 (2013).
Yeung, L. Y. et al. Extreme enrichment in atmospheric 15N15N. Sci. Adv. 3, eaao6741 (2017).
Halldórsson, S. A., Hilton, D. R., Barry, P. H., Füri, E. & Grönvold, K. Recycling of crustal material by the Iceland mantle plume: new evidence from nitrogen elemental and isotope systematics of subglacial basalts. Geochim. Cosmochim. Acta 176, 206–226 (2016); corrigendum 186, 360–364 (2016).
Lee, H., Sharp, Z. D. & Fischer, T. P. Kinetic nitrogen isotope fractionation between air and dissolved N2 in water: implications for hydrothermal systems. Geochem. J. 49, 571–573 (2015).
Ballentine, C. J., Burgess, R. & Marty, B. Tracing fluid origin, transport and interaction in the crust. Rev. Mineral. Geochem. 47, 539–614 (2002).
Warr, O., Rochelle, C. A., Masters, A. & Ballentine, C. J. Determining noble gas partitioning within a CO2–H2O system at elevated temperatures and pressures. Geochim. Cosmochim. Acta 159, 112–125 (2015).
Chiodini, G. et al. Insights from fumarole gas geochemistry on the origin of hydrothermal fluids on the Yellowstone Plateau. Geochim. Cosmochim. Acta 89, 265–278 (2012).
Bekaert, D. V., Broadley, M. W., Caracausi, A. & Marty, B. Novel insights into the degassing history of the Earth's mantle from high precision noble gas analysis of magmatic gas. Earth Planet. Sci. Lett. 525, 115766 (2019).
Sano, Y., Takahata, N., Nishio, Y., Fischer, T. P. & Williams, S. N. Volcanic flux of nitrogen from the Earth. Chem. Geol. 171, 263–271 (2001).
Avice, G. et al. Evolution of atmospheric xenon and other noble gases inferred from Archean to Paleoproterozoic rocks. Geochim. Cosmochim. Acta 232, 82–100 (2018).
Giggenbach, W. & Goguel, R. Collection and Analysis of Geothermal and Volcanic Water and Gas Discharges Report No. CD 2401 (Chemistry Division, DSIR, 1989).
Sherwood Lollar, B., Westgate, T., Ward, J., Slater, G. & Lacrampe-Couloume, G. Abiogenic formation of alkanes in the Earth's crust as a minor source for global hydrocarbon reservoirs. Nature 416, 522–524 (2002).
Young, E. D., Rumble, D., III, Freedman, P. & Mills, M. A large-radius high-mass-resolution multiple-collector isotope ratio mass spectrometer for analysis of rare isotopologues of O2, N2, CH4 and other gases. Int. J. Mass Spectrom. 401, 1–10 (2016).
Barry, P. et al. Noble gases solubility models of hydrocarbon charge mechanism in the Sleipner Vest gas field. Geochim. Cosmochim. Acta 194, 291–309 (2016).
Fischer, T. P. et al. Temporal variations in fumarole gas chemistry at Poás volcano, Costa Rica. J. Volcanol. Geotherm. Res. 294, 56–70 (2015).
Rizzo, A. L. et al. Kolumbo submarine volcano (Greece): an active window into the Aegean subduction system. Sci. Rep. 6, 28013 (2016).
Ward, J. A. et al. Microbial hydrocarbon gases in the Witwatersrand Basin, South Africa: implications for the deep biosphere. Geochim. Cosmochim. Acta 68, 3239–3250 (2004).
Sherwood Lollar, B. et al. Unravelling abiogenic and biogenic sources of methane in the Earth's deep subsurface. Chem. Geol. 226, 328–339 (2006).
Sano, Y. et al. Origin of methane-rich natural gas at the West Pacific convergent plate boundary. Sci. Rep. 7, 15646 (2017).
Sarda, P. & Graham, D. Mid-ocean ridge popping rocks: implications for degassing at ridge crests. Earth Planet. Sci. Lett. 97, 268–289 (1990).
Moreira, M., Kunz, J. & Allegre, C. Rare gas systematics in popping rock: isotopic and elemental compositions in the upper mantle. Science 279, 1178–1181 (1998).
Middleton, J. L., Langmuir, C. H., Mukhopadhyay, S., McManus, J. F. & Mitrovica, J. X. Hydrothermal iron flux variability following rapid sea level changes. Geophys. Res. Lett. 43, 3848–3856 (2016).
Jones, M. et al. New constraints on mantle carbon from Mid-Atlantic Ridge popping rocks. Earth Planet. Sci. Lett. 511, 67–75 (2019).
Péron, S. et al. Noble gas systematics in new popping rocks from the Mid-Atlantic Ridge (14° N): evidence for small-scale upper mantle heterogeneities. Earth Planet. Sci. Lett. 519, 70–82 (2019).
Cartigny, P., Pineau, F., Aubaud, C. & Javoy, M. Towards a consistent mantle carbon flux estimate: Insights from volatile systematics (H2O/Ce, δD, CO2/Nb) in the North Atlantic mantle (14°N and 34°N). Earth Planet. Sci. Lett. 265, 672–685 (2008).
Cartigny, P., Jendrzejewski, N., Pineau, F., Petit, E. & Javoy, M. Volatile (C, N, Ar) variability in MORB and the respective roles of mantle source heterogeneity and degassing: the case of the Southwest Indian Ridge. Earth Planet. Sci. Lett. 194, 241–257 (2001).
Füri, E. et al. Apparent decoupling of the He and Ne isotope systematics of the Icelandic mantle: the role of He depletion, melt mixing, degassing fractionation and air interaction. Geochim. Cosmochim. Acta 74, 3307–3332 (2010).
Mukhopadhyay, S. Early differentiation and volatile accretion recorded in deep-mantle neon and xenon. Nature 486, 101–104 (2012).
Bräuer, K., Kämpf, H., Niedermann, S., Strauch, G. & Weise, S. M. Evidence for a nitrogen flux directly derived from the European subcontinental mantle in the Western Eger Rift, central Europe. Geochim. Cosmochim. Acta 68, 4935–4947 (2004).
Libourel, G., Marty, B. & Humbert, F. Nitrogen solubility in basaltic melt. Part I. effect of oxygen fugacity. Geochim. Cosmochim. Acta 67, 4123–4135 (2003).
Broadley, M. W., Ballentine, C. J., Chavrit, D., Dallai, L. & Burgess, R. Sedimentary halogens and noble gases within Western Antarctic xenoliths: implications of extensive volatile recycling to the sub continental lithospheric mantle. Geochim. Cosmochim. Acta 176, 139–156 (2016).
Matsumoto, T., Chen, Y. & Matsuda, J.-i. Concomitant occurrence of primordial and recycled noble gases in the Earth's mantle. Earth Planet. Sci. Lett. 185, 35–47 (2001).
Buikin, A. et al. Noble gas isotopes suggest deep mantle plume source of late Cenozoic mafic alkaline volcanism in Europe. Earth Planet. Sci. Lett. 230, 143–162 (2005).
Gautheron, C., Moreira, M. & Allègre, C. He, Ne and Ar composition of the European lithospheric mantle. Chem. Geol. 217, 97–112 (2005).
Caracausi, A., Avice, G., Burnard, P. G., Füri, E. & Marty, B. Chondritic xenon in the Earth's mantle. Nature 533, 82–85 (2016).
Moreira, M., Rouchon, V., Muller, E. & Noirez, S. The xenon isotopic signature of the mantle beneath Massif Central. Geochem. Perspect. Lett. 6, 28–32 (2018).
Sobolev, A. V. et al. The amount of recycled crust in sources of mantle-derived melts. Science 316, 412–417 (2007).
Moreira, M. A., Dosso, L. & Ondréas, H. Helium isotopes on the Pacific-Antarctic ridge (52.5°–41.5° S). Geophys. Res. Lett. 35, L10306 (2008).
Day, J. M. & Hilton, D. R. Origin of 3He/4He ratios in HIMU-type basalts constrained from Canary Island lavas. Earth Planet. Sci. Lett. 305, 226–234 (2011).
Lowenstern, J. B., Evans, W. C., Bergfeld, D. & Hunt, A. G. Prodigious degassing of a billion years of accumulated radiogenic helium at Yellowstone. Nature 506, 355–358 (2014).
Warr, O. et al. Tracing ancient hydrogeological fracture network age and compartmentalisation using noble gases. Geochim. Cosmochim. Acta 222, 340–362 (2018).
Holland, G. et al. Deep fracture fluids isolated in the crust since the Precambrian era. Nature 497, 357–360 (2013).
Li, L., Cartigny, P. & Ader, M. Kinetic nitrogen isotope fractionation associated with thermal decomposition of NH3: experimental results and potential applications to trace the origin of N2 in natural gas and hydrothermal systems. Geochim. Cosmochim. Acta 73, 6282–6297 (2009).
Sherwood Lollar, B. et al. Evidence for bacterially generated hydrocarbon gas in Canadian Shield and Fennoscandian Shield rocks. Geochim. Cosmochim. Acta 57, 5073–5085 (1993).
Sherwood Lollar, B. et al. Abiogenic methanogenesis in crystalline rocks. Geochim. Cosmochim. Acta 57, 5087–5097 (1993).
Jean, M. M., Hanan, B. B. & Shervais, J. W. Yellowstone hotspot–continental lithosphere interaction. Earth Planet. Sci. Lett. 389, 119–131 (2014).
Plank, T. & Langmuir, C. H. The chemical composition of subducting sediment and its consequences for the crust and mantle. Chem. Geol. 145, 325–394 (1998).
Class, C. & le Roex, A. P. Ce anomalies in Gough Island lavas—trace element characteristics of a recycled sediment component. Earth Planet. Sci. Lett. 265, 475–486 (2008).
Eisele, J. et al. The role of sediment recycling in EM-1 inferred from Os, Pb, Hf, Nd, Sr isotope and trace element systematics of the Pitcairn hotspot. Earth Planet. Sci. Lett. 196, 197–212 (2002).
Jackson, M. G. et al. The return of subducted continental crust in Samoan lavas. Nature 448, 684–687 (2007).
Devey, C. W. et al. Active submarine volcanism on the Society hotspot swell (West Pacific): a geochemical study. J. Geophys. Res. Solid Earth 95, 5049–5066 (1990).
Dodson, A., Kennedy, B. M. & DePaolo, D. J. Helium and neon isotopes in the Imnaha Basalt, Columbia River Basalt Group: evidence for a Yellowstone plume source. Earth Planet. Sci. Lett. 150, 443–451 (1997).
Parai, R. & Mukhopadhyay, S. How large is the subducted water flux? New constraints on mantle regassing rates. Earth Planet. Sci. Lett. 317–318, 396–406 (2012).
Parai, R. & Mukhopadhyay, S. Xenon isotopic constraints on the history of volatile recycling into the mantle. Nature 560, 223–227 (2018); publisher correction 563, E28 (2018).
Johnson, B. & Goldblatt, C. The nitrogen budget of Earth. Earth Sci. Rev. 148, 150–173 (2015); corrigendum 165, 377–378 (2017)
Ader, M. et al. Interpretation of the nitrogen isotopic composition of Precambrian sedimentary rocks: assumptions and perspectives. Chem. Geol. 429, 93–110 (2016).
Goldblatt, C. et al. Nitrogen-enhanced greenhouse warming on early Earth. Nat. Geosci. 2, 891–896 (2009).
Nishizawa, M., Sano, Y., Ueno, Y. & Maruyama, S. Speciation and isotope ratios of nitrogen in fluid inclusions from seafloor hydrothermal deposits at ∼3.5 Ga. Earth Planet. Sci. Lett. 254, 332–344 (2007).
Marty, B., Zimmermann, L., Pujol, M., Burgess, R. & Philippot, P. Nitrogen isotopic composition and density of the Archean atmosphere. Science 342, 101–104 (2013).
Allègre, C. J., Staudacher, T. & Sarda, P. Rare gas systematics: formation of the atmosphere, evolution and structure of the Earth's mantle. Earth Planet. Sci. Lett. 81, 127–150 (1987).
Avice, G., Marty, B. & Burgess, R. The origin and degassing history of the Earth's atmosphere revealed by Archean xenon. Nat. Commun. 8, 15455 (2017).
Pinti, D. L., Hashizume, K. & Matsuda, J.-i. Nitrogen and argon signatures in 3.8 to 2.8 Ga metasediments: clues on the chemical state of the Archean ocean and the deep biosphere. Geochim. Cosmochim. Acta 65, 2301–2315 (2001).
Marty, B. & Humbert, F. Nitrogen and argon isotopes in oceanic basalts. Earth Planet. Sci. Lett. 152, 101–112 (1997).
Li, L. & Bebout, G. E. Carbon and nitrogen geochemistry of sediments in the Central American convergent margin: insights regarding subduction input fluxes, diagenesis, and paleoproductivity. J. Geophys. Res. Solid Earth 110, B11202 (2005).
Schultz, L. & Franke, L. Helium, neon, and argon in meteorites: a data collection. Meteorit. Planet. Sci. 39, 1889–1890 (2004).
Kerridge, J. F. Carbon, hydrogen, and nitrogen in carbonaceous chondrites: abundances and isotopic compositions in bulk samples. Geochim. Cosmochim. Acta 49, 1707–1714 (1985).
Marty, B. et al. Origins of volatile elements (H, C, N, noble gases) on Earth and Mars in light of recent results from the ROSETTA cometary mission. Earth Planet. Sci. Lett. 441, 91–102 (2016).
This study was supported by the Deep Carbon Observatory through Sloan Foundation grant numbers G-2018-11346 and G-2017-9815 to E.D.Y. The Deep Carbon Observatory also supported field trips via grant numbers G-2016-7206 and G-2017-9696 to P.H.B. We thank S. Mukhopadhyay for providing a sample of popping rock; K. Farley for lending equipment; and J. Dottin, M. Bonifacie, V. Busigny, P. Cartigny and A. Shahar for helpful discussions.
J. Labidi
Present address: Université de Paris, Institut de physique du globe de Paris, CNRS, Paris, France
Department of Earth, Planetary, and Space Sciences, UCLA, Los Angeles, CA, USA
J. Labidi, I. E. Kohl & E. D. Young
Marine Chemistry and Geochemistry Department, Woods Hole Oceanographic Institution, Woods Hole, MA, USA
P. H. Barry & M. D. Kurz
Centre de Recherches Pétrographiques et Géochimiques, CNRS, Université de Lorraine, Vandoeuvre les Nancy, France
D. V. Bekaert, M. W. Broadley & B. Marty
Department of Earth Sciences, University of Toronto, Toronto, Ontario, Canada
T. Giunta, O. Warr & B. Sherwood Lollar
Department of Earth and Planetary Sciences, University of New Mexico, Albuquerque, NM, USA
T. P. Fischer
Université de Paris, Institut de physique du globe de Paris, CNRS, Paris, France
G. Avice
Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Palermo, Italy
A. Caracausi
Department of Earth Sciences, University of Oxford, Oxford, UK
C. J. Ballentine
Nordvulk, Institute of Earth Sciences, University of Iceland, Reykjavik, Iceland
S. A. Halldórsson & A. Stefánsson
Thermo Fisher Scientific, Bremen, Germany
I. E. Kohl
E.D.Y. designed the study. J.L. made the nitrogen isotopologue measurements of all mantle-derived samples and most cratonic samples, interpreted the data and wrote the manuscript with feedback from E.D.Y. E.D.Y. and J.L. constructed the box models. I.E.K. made nitrogen isotopologue measurements of some of the cratonic samples. P.H.B., D.V.B., M.W.B., T.P.F. and A.C. measured noble gas abundances and isotope ratios in mantle-derived samples. O.W. and T.G. measured major element chemistry in cratonic gases. P.H.B., D.V.B., M.W.B., T.P.F. and B.M. conducted field trips and sample collection in Yellowstone, USA. D.V.B., M.W.B. and B.M. conducted a field trip and sample collection in Eifel, Germany. P.H.B., A.S. and S.A.H. conducted a field trip and sample collection in Iceland. B.M. and T.P.F. conducted a field trip and sample collection in East Africa. T.P.F. conducted a field trip and sample collection in Hawaii. B.S.L., O.W. and T.G. conducted multiple field trips and sample collections in the Kidd Creek and Sudbury mines, Canada. G.A. assisted in acquiring data for popping rocks. M.D.K. contributed popping rock samples. All authors contributed to the final manuscript preparation.
Correspondence to J. Labidi or E. D. Young.
Peer review information Nature thanks Rita Parai and Yuji Sano for their contribution to the peer review of this work.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
Extended Data Fig. 1 Probability density plots for N2/36Ar, N2/3He and δ15N based on literature data and our study.
The relative probabilities are scaled so that each probability is visible on the same plot. The probability densities for δ15N are taken from the mean and standard deviation of the reported measured values in the cases of Yellowstone and Eifel. The δ15N MORB3,49,86 and metasediment12,14,15,87 data were compiled from the literature. The probability densities for molecular ratios were calculated by taking the ratios of Monte Carlo draws for numerator and denominator and propagating nominal 20% errors assigned to each molecular concentration. Literature data for chondritic N2/36Ar were obtained using N and 36Ar concentrations in individual chondrites19,88,89. The dataset includes all major types of carbonaceous, enstatite and ordinary chondrites. No systematic difference could be observed between chondrite groups. Using this global dataset, we find that chondritic estimates used earlier3,90 for N2/36Ar cannot be replicated. N2/36Ar and N2/3He estimates for metasediments are from Sano et al.32, assuming a normal distribution for the uncertainties. The convective mantle N2/3He and N2/36Ar data are from ref. 11 and references cited in Extended Data Fig. 5.
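The ratio propagation can be sketched in R as follows (a sketch only; the concentrations below are illustrative placeholders, not measured values):

# Monte Carlo propagation of nominal 20% (1 s.d.) errors through a ratio
set.seed(1)
n_draws <- 1e5
n2_conc <- 1.0e-3     # illustrative N2 concentration (arbitrary units)
ar36_conc <- 5.0e-10  # illustrative 36Ar concentration (arbitrary units)
num <- rnorm(n_draws, mean = n2_conc, sd = 0.2 * n2_conc)
den <- rnorm(n_draws, mean = ar36_conc, sd = 0.2 * ar36_conc)
ratio <- num / den
plot(density(ratio))  # probability density of the molecular ratio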
Extended Data Fig. 2 3He/4He of Icelandic gases plotted against their 4He/20Ne ratios normalized to air.
Literature data50 illustrate a three-component mixing with air, the convective mantle and the plume mantle. The convective mantle endmember is characterized by 3He/4He of ∼8 RA and a relatively high 4He/20Ne relative to air44. The plume component is characterized by primordial 3He/4He values of up to ∼30 RA and a 4He/20Ne value lower than the convective mantle50,51. On this plot, our 11 samples have compositions with clear contributions from the three endmembers.
Extended Data Fig. 3 N2/He and N2/3He ratios versus δ15N in gases from Iceland, Yellowstone, Ayrolaf, Eifel and Hawaii.
Top, values for N2/He versus δ15N for air, the convective mantle and cratonic gases compared with the samples from this study are shown. This is a similar plot to that in Fischer et al.8 except that cratonic gases are shown for the first time, to our knowledge. The extremely low N2/He ratio for cratonic gases derived here results from substantial accumulation of 4He over geological times. See the Supplementary Discussion for definitions of the cratonic gases based on samples from the Canadian Shield (data in Supplementary Table 2). In this space, data are usually interpreted as representing ternary mixtures. However, this plot fails to account for the processes occurring in hydrothermal systems, whereby the extremely low δ15N values are from isotopically fractionated N2 degassing from geothermal waters, not mixing with mantle components (see main text). Bottom, we show the values for N2/3He versus δ15N for air, the convective mantle and cratonic gases compared with samples from this study. This is a similar plot to that in Sano et al.32 except that cratonic gases are defined (see the Supplementary Information). The samples with low N2/3He values relative to the cratonic endmember can be assumed to receive negligible cratonic nitrogen (see the main text and Supplementary Discussion on Yellowstone). The mantle gases are taken from ref. 3.
Extended Data Fig. 4 84Kr/36Ar and 132Xe/36Ar ratios versus δ15N in gases from hydrothermal systems with near-atmospheric values.
84Kr/36Ar, 132Xe/36Ar and δ15N values are shown for Iceland and Yellowstone samples for which we have heavy noble gas data and Δ30 values of 16‰ and higher. In the two plots, the data define a negative trend, implying that the nitrogen loss causing the δ15N variations occurs together with preferential Kr (top) and Xe (bottom) losses relative to argon. This is in contrast with predictions based on solubilities obtained under ideal conditions, where both Kr and Xe are expected to be more soluble than Ar29. We suggest that this represents degassing of air-saturated water under extreme temperature and pressure conditions, where gas solubilities deviate considerably from behaviour governed by Henry's Law29. Here, the data would require Kr (and Xe) to become less soluble than Ar.
Extended Data Fig. 5 N2/36Ar ratios versus 40Ar/36Ar ratios in basalts and rocks from the Kola plume.
Data are from the literature2,3,4,86 and illustrate mixing between mantle gases and air, most probably as the result of the introduction of air into rock cracks during eruption or sample handling. The highest 40Ar/36Ar ratio recorded in basalt crushing experiments with simultaneous N2/36Ar measurements is 42,366 ± 9,713. At this value, the corresponding N2/36Ar ratio is 4.2 (+2.0/−1.5) × 10^6, which was assigned to the convective mantle86. The convective mantle is more likely to have a 40Ar/36Ar ratio of 25,000 ± 2,000 (ref. 44). At this 40Ar/36Ar ratio, the corresponding N2/36Ar value becomes 2.0 (+1.0/−1.2) × 10^6. At a 40Ar/36Ar value of 5,000 (refs. 2,4), the plume N2/36Ar value would be lower, at 0.4 ± 0.2 × 10^6. However, at the 40Ar/36Ar value of 10,000 suggested by recent studies51, we obtain N2/36Ar = 0.7 (+0.5/−0.3) × 10^6 for the plume according to the correlation between N2/36Ar and 40Ar/36Ar.
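Because both ratios share 36Ar as the denominator, a two-component mixture plots as a straight line in the fraction f of air-derived 36Ar. A minimal R sketch of such a mixing array (mantle endmember values taken from this legend; air values are standard atmospheric figures used here for illustration only):

mix <- function(f, air, mantle) f * air + (1 - f) * mantle
f <- seq(0, 1, length.out = 200)                  # fraction of 36Ar derived from air
ar40_36 <- mix(f, air = 298.6, mantle = 25000)    # 40Ar/36Ar endmembers
n2_36 <- mix(f, air = 2.5e4, mantle = 2.0e6)      # N2/36Ar endmembers
plot(ar40_36, n2_36, type = "l", log = "xy",
     xlab = "40Ar/36Ar", ylab = "N2/36Ar")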
Extended Data Fig. 6 Mass balance applied to account for Eifel and Yellowstone, in terms of δ15N, N2/36Ar and N2/3He.
δ15N values of Eifel and Yellowstone are shown, as derived from Fig. 1. N2/36Ar and N2/3He values of Eifel and Yellowstone are also shown, as derived from Figs. 2, 3. Recycled components have high elemental ratios, according to ref. 32. These ratios might be even higher if N were less devolatilized than noble gases during slab devolatilization. Note that this would not change our conclusion, since the mixing curve would remain identical. a, In the N2/3He space, the position of Eifel and Society basalts can be explained by a simple two-component mixing between the convective mantle and some recycled component. The values for the three Society basalts are taken from ref. 4. The dataset was filtered to show only basalts with the lowest levels of air contamination: we only used the three basalts with 40Ar/36Ar > 5,000. For this mixing to account for Yellowstone, anomalously high δ15N would be required (see b). An alternative mixing requires a 3He-rich reservoir to be postulated with a low N2/3He ratio. We illustrate this speculation with a δ15N of –5‰, like that of the convective mantle. However, other δ15N values (typically between −10 and +10‰) would also fit the Yellowstone data. This is because N in the Yellowstone source would mostly be accounted for by the recycled component, not by the 3He-rich endmember forced to have a low N2/3He ratio. b, In the N2/36Ar space, Eifel can be explained by conventional mixing. If such mixing involves the known convective mantle, Yellowstone requires a recycled endmember with δ15N > 50‰, which is implausible. Again, for a mixing to account for the data, one would have to assign the 3He-rich reservoir a low N2/36Ar ratio.
Extended Data Fig. 7 The evolution of δ15N and nitrogen abundances in the convective mantle and at the Earth's surface as a function of time.
Various cases with time-dependent solutions are explored. Curves are calculated as for Fig. 4 (main text) using a two-box model described in the Methods and main text. Here, three cases are modelled on the basis of the combination of various temporal variations in volcanic outgassing and subduction fluxes shown in the bottom panel. Modern fluxes are taken from ref. 13. The blue curves (right) are for the surface (air + continental crust), and the red curves (left) are for the convective mantle. As in Fig. 4, the starting composition for the mantle was chosen to have an enstatite chondrite-like δ15N (ref. 6), and various initial N abundances. A critical result of the model is that varying fluxes can easily match the N abundances for the mantle and the surface, as well as the isotope composition of N at the surface. However, as in the case where constant fluxes are used (Fig. 4), relatively high subduction fluxes push the N isotope cycle towards a steady state in which the mantle would have a higher average δ15N value than that of the surface, contrary to the relationship observed today. The model shown as Case 3 provides an acceptable match to the modern observations, with the mantle reaching a δ15N value of −1‰ after starting at −40‰. However, this predicts that the mantle evolves considerably through time in terms of δ15N. Thus, if correct, this prediction requires all peridotitic diamonds to have formed in roughly the past 500 Myr. However, peridotitic diamonds found in cratonic lithospheres as old as 3.3 Gyr are dominated by a δ15N mode at −5‰ (ref. 18), seemingly ruling out the Case 3 model.
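The flux bookkeeping underlying such a model can be sketched in R (a generic forward-Euler sketch assuming first-order fluxes and no isotopic fractionation on transfer; the published model, described in the Methods, uses the time-dependent fluxes shown in the bottom panel and is more detailed):

# Two-box (mantle/surface) nitrogen mass and isotope balance
two_box <- function(Nm, Ns, dm, ds, k_out, k_sub, dt, nstep) {
  for (i in seq_len(nstep)) {
    F_out <- k_out * Nm            # volcanic outgassing flux (mantle -> surface)
    F_sub <- k_sub * Ns            # subduction flux (surface -> mantle)
    Nm2 <- Nm - F_out * dt + F_sub * dt
    Ns2 <- Ns + F_out * dt - F_sub * dt
    dm2 <- (dm * (Nm - F_out * dt) + ds * F_sub * dt) / Nm2  # flux-weighted d15N
    ds2 <- (ds * (Ns - F_sub * dt) + dm * F_out * dt) / Ns2
    Nm <- Nm2; Ns <- Ns2; dm <- dm2; ds <- ds2
  }
  c(N_mantle = Nm, N_surface = Ns, d15N_mantle = dm, d15N_surface = ds)
}
# Illustrative run: enstatite chondrite-like starting mantle (d15N = -40 per mil)
two_box(Nm = 1, Ns = 0, dm = -40, ds = -40, k_out = 0.5, k_sub = 0.1, dt = 0.01, nstep = 5000)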
Supplementary Table 1
Nitrogen and noble gas data for the springs and fumaroles studied. All the nitrogen data were acquired at UCLA. Noble gas data were acquired in various labs (see legend).
Supplementary Table 2
Nitrogen and noble gas data for cratonic samples studied here (see Supplementary Discussion). All the nitrogen isotope data were acquired at UCLA. Noble gas data were acquired at the University of Toronto (see legend).
Supplementary Table 3
Nitrogen isotope data for air standards of varying sizes. Data were acquired at UCLA over the course of this study.
Source Data Fig. 1
Labidi, J., Barry, P.H., Bekaert, D.V. et al. Hydrothermal 15N15N abundances constrain the origins of mantle nitrogen. Nature 580, 367–371 (2020). https://doi.org/10.1038/s41586-020-2173-4
Issue Date: 16 April 2020
\begin{document}
\title{F-signature function of quotient singularities}
\author{Alessio Caminata}
\address{Institut de Matem\`{a}tica, Universitat de Barcelona \\ Gran Via de les Corts Catalanes 585, 08007 Barcelona, Spain}
\email{[email protected]}
\thanks{The first author is supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 701807.}
\author{Alessandro De Stefani} \address{Department of Mathematics, University of Nebraska, 203 Avery Hall, Lincoln, NE 68588} \email{[email protected]}
\begin{abstract}
We study the shape of the F-signature function of a $d$-dimensional quotient singularity $\Bbbk\ps{x_1,\ldots,x_d}^G$, and we show that it is a quasi-polynomial. We prove that the second coefficient is always zero and we describe the other coefficients in terms of invariants of the finite acting group $G\subseteq {\rm Gl}(d,\Bbbk)$. When $G$ is cyclic, we obtain more specific formulas for the coefficients of the quasi-polynomial, which allow us to compute the general form of the function in several examples. \end{abstract}
\maketitle \section{Introduction}
Let $(R,\mathfrak{m},\Bbbk)$ be a commutative complete Noetherian local domain of characteristic $p>0$, and assume that the residue field $\Bbbk = R/\mathfrak{m}$ is perfect. For a positive integer $e$, let $F^e:R \to R$ denote the $e$-th iterate of the Frobenius endomorphism on $R$. The map $F^e$ can be identified with the $R$-module inclusion $R \hookrightarrow R^{1/p^e}$, where $R^{1/p^e}$ is the ring obtained by adjoining $p^e$-th roots of elements in $R$. The main object of study of this article is the {\it F-signature function of $R$}, that is, the function \[ \xymatrixrowsep{1mm} \xymatrixcolsep{1mm} \xymatrix{ FS:& \mathbb{N} \ar[rrr] &&& \mathbb{N} \\ &e \ar[rrr] &&& \frk_R(R^{1/p^e}), } \] where $\frk_R(R^{1/p^e})$ denotes the maximal rank of a free $R$-summand of $R^{1/p^e}$ or, equivalently, the maximal rank of a free $R$-module $P$ for which there is a surjection $R^{1/p^e} \to P\to 0$.
The F-signature function was introduced by Smith and Van den Bergh, in the context of rings with finite F-representation type \cite{SmithVDB}. Even though this function has several interesting properties, most of the efforts have been devoted to studying its leading term, called the F-signature of $R$, and denoted $\s(R)$ (see Section \ref{Section_background} for more precise definitions). Despite being a coarser invariant, $\s(R)$ already encodes a significant amount of information about the ring and its singularities. For example, $R$ is regular if and only if $\s(R)=1$ \cite{HunekeLeuschke}, and $R$ is strongly F-regular if and only if $\s(R)>0$ \cite{AberbachLeuschke}. However, $\s(R)$ is typically very hard to compute explicitly, and it is known only in a few sporadic cases. Moreover, the techniques that allow one to determine $\s(R)$ often do not allow one to compute the whole F-signature function. Therefore, even less is known about $FS(e)$, with a few very special exceptions (for instance, see \cite{Brinkmann}, or \cite[Example 7]{SinghFSignature}).
Another function that can be defined in the same setup is the Hilbert-Kunz function $e \mapsto HK(e) = \ell_R(R/\mathfrak{m}^{[p^e]})$, where $\ell_R$ denotes the length of an $R$-module, and $\mathfrak{m}^{[p^e]}$ is the ideal generated by the elements $r^{p^e}$, for $r \in \mathfrak{m}$. The Hilbert-Kunz function was first investigated by Kunz in \cite{Kunz} and \cite{F-finExc}. In \cite{Monsky}, Monsky showed that $HK(e)=e_{HK}(R)p^{de}+O(p^{(d-1)e})$, where $e_{HK}(R)$ is a positive real number called the Hilbert-Kunz multiplicity and $d$ is the Krull dimension of $R$. The main connection with the F-signature function can be best stated when $R$ is a Gorenstein singularity with minimal multiplicity, in which case $HK(e) = \ell_R(R/(x_1,\ldots,x_d))p^{de} - FS(e)$ for all $e\in\mathbb{N}$ \cite{HunekeLeuschke}. Here, $x_1,\ldots,x_d$ denotes any minimal reduction of the maximal ideal $\mathfrak{m}$. In the case when $R$ is Gorenstein with minimal multiplicity, knowledge of the function $HK(e)$ therefore leads to that of the F-signature function $FS(e)$. Like the F-signature function, the Hilbert-Kunz function is also quite mysterious, and known only for very specific classes of rings. Among other results in this direction, see \cite{Brenner}, \cite{Brinkmann}, \cite{Kurano2}, \cite{HanMonsky}, \cite{F-finExc}, \cite{Kurano1}, \cite{MillerSwanson}, \cite{RobinsonSwanson}.
In the effort of understanding the shape of the Hilbert-Kunz function, the question of whether there exists a ``second coefficient'' for $HK(e)$ has caught the attention of several researchers. One says that $HK(e)$ has a second coefficient if there exists $\beta\in\mathbb{R}$ such that $HK(e)=e_{HK}(R)p^{de}+\beta p^{(d-1)e}+O(p^{(d-2)e})$. Huneke, McDermott, and Monsky \cite{HunekeMcDermottMonsky} prove that, if $R$ is excellent, normal, and $F$-finite, then this is the case. Chan and Kurano \cite{Kurano2} prove that the same result holds if one replaces normal with regular in codimension one. Brenner \cite{Brenner} shows that, for standard graded normal domains of dimension two over an algebraically closed field, the second coefficient equals zero. In \cite{Kurano1}, Kurano proves that the same conclusion holds for F-finite $\mathbb{Q}$-Gorenstein local rings with algebraically closed residue field.
For the F-signature function, it is known that $FS(e)=s(R)p^{de}+O(p^{(d-1)e})$ (see \cite{Tucker2012}, \cite{PolTuc}). In their recent work, Polstra and Tucker ask whether a second coefficient for the F-signature function exists as well \cite[Question 7.4]{PolTuc}. We thank Polstra for pointing out to us that this is known to be true for some classes of rings, including rings that are $\mathbb{Q}$-Gorenstein on the punctured spectrum and affine semigroup rings, as a consequence of the existence of a second coefficient for Hilbert-Kunz functions with respect to $\mathfrak{m}$-primary ideals \cite{HunekeMcDermottMonsky}. Using this approach, Brinkmann \cite{Brinkmann} computes the F-signature function of 2-dimensional ADE singularities, and shows that the second coefficient exists, and it is equal to zero. In this article, we prove that the same result holds for the larger class of $d$-dimensional quotient singularities (see Theorem \ref{theoremA}).
Throughout, $\Bbbk$ denotes an algebraically closed field, and $G \subseteq \mathrm{Gl}(d,\Bbbk)$ is a finite group, that acts linearly on $S=\Bbbk\ps{x_1,\ldots,x_d}$. We assume that the characteristic of $\Bbbk$ does not divide $|G|$, and we let $R=S^G$ be the ring of invariants under this action. We say that an element $g$ of $G$ is a $c$-pseudoreflection if, when viewed as an element of $\mathrm{Gl}(d,\Bbbk)$, it has eigenvalue $1$ with multiplicity $c$, and $d-c$ eigenvalues different from $1$. In particular, the only $d$-pseudoreflection is the identity. In what follows, we assume that the group $G$ is small, that is, $G$ contains no $(d-1)$-pseudoreflections.
There is a known connection between the F-signature of $R$ and the acting group $G$: in our assumptions, $\s(R) = \frac{1}{|G|}$ \cite[Theorem 4.2]{WatanabeYoshida}. In fact, even deeper connections can be established for the generalized F-signature of certain modules \cite{HashimotoNakajima}, even in a more general setup \cite{HashimotoSymonds}. We further develop the relation established in \cite{WatanabeYoshida}, giving a description of the F-signature function of $R$ in terms of $c$-pseudoreflections.
\begin{theoremx}[see Theorem \ref{theorem-Fsignaturefunctionquotient} and Proposition \ref{prop-Fsignaturefuncionquotient}] \label{theoremA} Let $\Bbbk$ be an algebraically closed field of positive characteristic $p$, and $G$ be a finite small subgroup of $\mathrm{Gl}(d,\Bbbk)$ such that $p\nmid |G|$. Let $S=\Bbbk\llbracket x_1,\dots,x_d\rrbracket$ be a power series ring, and let $R=S^G$ be the ring of invariants of $S$ under the action of $G$. The F-signature function of $R$ is a quasi-polynomial in $p^e$: \[ FS(e) = \varphi_dp^{de}+\varphi_{d-1}p^{(d-1)e}+\cdots+\varphi_{1}p^e+\varphi_{0}.
\]
For $0 \leq c \leq d$, $\varphi_c=\varphi_{c}(e)$ is a function that takes values in $\mathbb{Q}$, is bounded, and is periodic of period at most $|G|-1$. Moreover, denoting by $G_c$ the set of $c$-pseudoreflections of $G$: \begin{enumerate} \item $\varphi_c$ is identically zero if and only if $G$ does not contain any $c$-pseudoreflections.
\item If $p^e \equiv 1$ modulo $|G|$, then $\varphi_c(e) = \frac{|G_c|}{|G|}$. \end{enumerate}
In particular, we have that $\varphi_d(e)=\frac{1}{|G|}$, and $\varphi_{d-1}(e)=0$ for all $e\in\mathbb{N}$. \end{theoremx} We remark that, when $G$ is Abelian, the fact that $FS(e)$ is a quasi-polynomial with rational coefficients can be deduced from \cite{Bruns} and \cite{VonKorff}, since in this case $R$ is toric.
As quotient singularities have finite F-representation type \cite{SmithVDB}, our methods actually yield more general formulas for the multiplicities $\mult(M_\alpha,R^{1/p^e})$, where the modules $M_\alpha$ run over the indecomposable $R$-modules that appear in a direct sum decomposition of $R^{1/p^e}$, for $e\in\mathbb{N}$. The multiplicity functions include the F-signature function, since $M_0=R$, and thus $FS(e) = \mult(M_0,R^{1/p^e})$. In analogy with $FS(e)$, the aforementioned generalized F-signature of a module $M_\alpha$ is the leading coefficient $\varphi_d^{(\alpha)}$ of the quasi-polynomial $\mult(M_\alpha,R^{1/p^e})$. In this sense, Theorem \ref{theorem-Fsignaturefunctionquotient} generalizes the main result of \cite{HashimotoNakajima}.
In the second part of the article, we focus on the case when the group $G$ is cyclic of order $n$. Viewing a generator $g \in G$ as an element of $\mathrm{Gl}(d,\Bbbk)$, one can assume that $g$ is represented by a diagonal matrix, with $n$-th roots of unity on the diagonal. We can then associate to $g$ a $d$-uple $(t_1,\ldots,t_d)$ that records the exponents of the diagonal entries with respect to a fixed primitive $n$-th root of unity. For every $J \subseteq \{1,\ldots,d\}$, we set $g_J$ to be the greatest common divisor of $n$, together with the integers $\{t_j \ : \ j \in J\}$. For instance, $g_{\{1,\ldots,d\}} = \gcd(t_1,\ldots,t_d,n)$, while $g_{\{1\}} = \gcd(t_1,n)$.
Our second main result is a formula to explicitly compute the F-signature function of $R$ in terms of the integers $g_J$, and functions $e \mapsto \theta_J(e)$ which count the number of solutions of certain congruences (see Notation \ref{notation_psi} for more details). Let $\Gamma_i$ be the set of subsets of $\{1,2,\ldots,d\}$ of cardinality $i$, and set $\psi_i = \sum_{J \in \Gamma_i} g_J\theta_J$. The functions $\psi_i = \psi_i(e)$ are also bounded and periodic, of period at most $|G|-1$.
\begin{theoremx}[see Theorems \ref{theorem-Fsignaturecyclic} and \ref{theorem-peggiodelcasomonomiale}] \label{theoremB} In the setup of Theorem \ref{theoremA}, assume further that $G$ is cyclic of order $n$. For all $e\in\mathbb{N}$, write $p^e=kn+r_e$, where $0 < r_e < n$. With the notation introduced above, the functions $\varphi_c$ can then be expressed as \[ \displaystyle \varphi_c(e) = \frac{1}{n} \left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i(e)r_e^{i-c}\right]. \] \end{theoremx} As with Theorem \ref{theoremA}, the formulas of Theorem \ref{theoremB} generalize to analogous formulas for the functions $\varphi_c^{(\alpha)}(e)$, which are the coefficients of the multiplicity functions $\mult(M_\alpha,R^{1/p^e})$ (see Theorem \ref{theorem-Fsignaturecyclic}).
As a direct consequence of Theorem \ref{theoremB}, we obtain an explicit description of the F-signature function of Veronese rings up to a bounded periodic function $\theta_\emptyset$, defined in Notation \ref{notation_psi}. Recall that the (complete) $d$-dimensional Veronese ring of order $n$ over a field $\Bbbk$ is the ring $R=\Bbbk\ps{x_1,\ldots,x_d}^G$, where $G=\mathbb{Z}/(n)$, and a generator $g \in G$ is identified with the matrix ${\rm diag}(\lambda,\ldots,\lambda) \in {\rm Gl}(d,\Bbbk)$, where $\lambda$ is a primitive $n$-th root of unity in $\Bbbk$. Alternatively, $R$ can be viewed as the completion at the irrelevant maximal ideal of the $\Bbbk$-subalgebra of $\Bbbk[x_1,\ldots,x_d]$ generated by the monomials of degree $n$ in the variables $x_1,\ldots,x_d$. \begin{corollaryx}[see Corollary \ref{coroll_gcd1}] Let $R$ be a $d$-dimensional Veronese ring of order $n$ over an algebraically closed field of characteristic $p>0$. For $e\in\mathbb{N}$, write $p^e=kn + r_e$, with $0 < r_e < n$. The F-signature function of $R$ is \[ \displaystyle FS(e) = \frac{p^{de}-r_e^d}{n} + \theta_\emptyset, \] where $\theta_\emptyset$ is the number of integral $d$-uples $(a_1,\ldots,a_d)$, contained inside the $d$-dimensional cube $[0,r_e-1]^d$, that satisfy $a_1+\ldots + a_d \equiv 0$ modulo $n$. In particular, if $r_e=1$, then \[ \displaystyle FS(e) = \frac{p^{de}-1}{n} + 1. \] \end{corollaryx}
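For concreteness, the formula of the corollary is easy to test numerically. The following Python snippet (an informal sketch of ours, not part of the formal development; all function names are our own) compares the closed formula with a direct count of the lattice points described in Proposition \ref{prop-Fsignatureofcyclic}, for the $2$-dimensional Veronese ring of order $3$ in characteristic $2$.

\begin{verbatim}
# Informal check of the Veronese formula (illustrative sketch).
from itertools import product

def FS_bruteforce(p, e, n, d):
    # Count d-tuples (a_1,...,a_d) in [0, p^e - 1]^d with a_1 + ... + a_d = 0 mod n.
    q = p**e
    return sum(1 for a in product(range(q), repeat=d) if sum(a) % n == 0)

def FS_formula(p, e, n, d):
    q = p**e
    r = q % n  # r = r_e, since 0 < p^e mod n < n when gcd(p, n) = 1
    theta = sum(1 for a in product(range(r), repeat=d) if sum(a) % n == 0)
    return (q**d - r**d) // n + theta

for e in range(1, 5):  # d = 2, n = 3, p = 2: here r_e alternates between 2 and 1
    assert FS_bruteforce(2, e, 3, 2) == FS_formula(2, e, 3, 2)
\end{verbatim}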
This paper is structured as follows: in Section \ref{Section_background}, we recall the main definitions and results concerning the F-signature function and Auslander's correspondence, which we use extensively throughout the article. In Section \ref{Section_quotient_sing}, we study the F-signature function of quotient singularities, and prove Theorem \ref{theoremA}. In Section \ref{section:cyclicquotient} we focus on the cyclic case to obtain Theorem \ref{theoremB}, and deduce a formula for Veronese rings. Finally, in Section \ref{section:examples} we provide several examples, to illustrate explicitly how Theorems \ref{theoremA} and \ref{theoremB} can be used to compute the F-signature function of some specific quotient singularities.
\section{Background} \label{Section_background} \subsection{F-signature function} Let $R$ be a commutative Noetherian ring of prime characteristic $p>0$. For a positive integer $e$, let $F^e:R \to R$ denote the $e$-th iterate of the Frobenius endomorphism on $R$, that is, the map that raises every element of $R$ to its $p^e$-th power. Given a finitely generated $R$-module $M$, we denote by $^e\!M$ the module $M$, whose $R$-module structure is pulled back via $F^e$. More explicitly, for $^e\!m_1,^e\!m_2 \in \!^e\!M$ and $r \in R$ we have \[ \displaystyle ^e\!m_1+\!^e\!m_2 = \!^e\!(m_1+m_2) \ \ \mbox{ and } \ \ r \cdot \!^e\!m_1 = \!^e\!(r^{p^e}m_1). \] When the ring $R$ is reduced, the Frobenius endomorphism $F^e:R\rightarrow R$ can be identified with the natural inclusion $R\hookrightarrow R^{1/p^e}$, where $R^{1/p^e}$ is the ring obtained by adjoining $p^e$-th roots of elements of $R$. In particular, $^e\!R$ can be identified with $R^{1/p^e}$.
Throughout, we assume that $(R,\mathfrak{m},\Bbbk)$ is a complete local domain with perfect residue field. Let $K$ be the fraction field of $R$. By the rank of a finitely generated $R$-module $M$, we mean the dimension of the $K$-vector space $M \otimes_R K$. We let $\frk_R(R^{1/p^e})$ denote the maximal rank of a free $R$-module $P$ for which there is a surjection $R^{1/p^e} \to P \to 0$.
We now introduce the main object of study of this article.
\begin{Definition}[Smith-Van den Bergh, Huneke-Leuschke] \label{Defn_FSig} Let $(R,\mathfrak{m},\Bbbk)$ be a complete local domain with perfect residue field. The {\it F-signature function of $R$} is defined as \[ \xymatrixrowsep{1mm} \xymatrixcolsep{1mm} \xymatrix{ FS:& \mathbb{N} \ar[rrr] &&& \mathbb{N} \\ &e \ar[rrr] &&& \frk_R(R^{1/p^e}). } \] \end{Definition}
The F-signature function was introduced and first studied by Smith and Van den Bergh, with a main focus on rings with finite F-representation type \cite{SmithVDB}. See the end of the section for a more precise definition. Subsequently, in \cite{HunekeLeuschke}, Huneke and Leuschke focused on an asymptotic normalized version of this function: they defined the F-signature of $R$ as the limit $\s(R) = \lim_{e \to \infty} \frac{FS(e)}{p^{de}}$, where $d$ is the Krull dimension of $R$. It is easy to see that $0 \leq \s(R) \leq 1$ always holds, but the convergence of such a limit is far from trivial. The existence of the F-signature in full generality was a major open problem, until Tucker gave a proof in \cite{Tucker2012}. In joint work with Polstra and Yao, the second author generalized the definition of F-signature to a more general setup, where the ring does not need to be local \cite{DSPY}.
\begin{Remark} In Definition \ref{Defn_FSig}, the assumption that $R$ is a complete domain and $\Bbbk$ is perfect can be greatly weakened. However, the rings we investigate in this article are of this form. Therefore, we do not provide the most general definition here. \end{Remark}
\par Let $(R,\mathfrak{m},\Bbbk)$ be a complete Noetherian local ring with perfect residue field. The category of finitely generated $R$-modules satisfies the Krull-Remak-Schmidt property. It follows that every finitely generated $R$-module can be uniquely decomposed (up to isomorphism) as a direct sum of indecomposable finitely generated $R$-modules. In our running assumptions, $R$ is F-finite. This means that, for each $e\in\mathbb{N}$, the module $R^{1/p^e}$ is finitely generated, hence a direct sum of finitely generated indecomposable $R$-modules. We say that $R$ has \emph{finite F-representation type} (FFRT for short) if there exists a finite set $\mathcal{N}$ of indecomposable $R$-modules such that for every $e\in\mathbb{N}$ the $R$-module $R^{1/p^e}$ is isomorphic to a direct sum of elements of $\mathcal{N}$. In other words, if $R$ has FFRT and $\mathcal{N}=\{M_0=R,M_1,\dots,M_r\}$, then for all $e\in\mathbb{N}$ we can write \[ R^{1/p^e}\cong M_0^{c_{0,e}}\oplus M_1^{c_{1,e}}\cdots\oplus M_r^{c_{r,e}} \] for some uniquely determined integers $c_{0,e},\ldots,c_{r,e}$. \begin{Notation} For $\alpha\in\{0,\dots,r\}$, we denote $\mult(M_{\alpha},R^{1/p^e})=c_{\alpha,e}$ and we call it \emph{the multiplicity of $M_{\alpha}$ inside $R^{1/p^e}$}. In particular for $\alpha=0$, we have $M_0=R$ and the function $\mult(R,R^{1/p^e})=\frk_R(R^{1/p^e})=FS(e)$ is the F-signature function of $R$. \end{Notation}
\par The notion of FFRT and the functions $e\mapsto\mult(M_{\alpha},R^{1/p^e})$ were introduced by Smith and Van den Bergh \cite{SmithVDB}. They proved that if $R$ is strongly F-regular then the limit \[ \lim_{e \to \infty} \frac{\mult(M_{\alpha},R^{1/p^e})}{p^{de}} \] exists, and is strictly positive. Around the same time, Seibert \cite{Seibert} studied a similar problem and proved the existence of the previous limit, assuming that $R$ has finite Cohen-Macaulay type.
\par As already pointed out for the F-signature function, in this article we are interested in studying the functions $\mult(M_{\alpha},R^{1/p^e})$, rather than the asymptotic behavior of \ $\frac{\mult(M_\alpha, R^{1/p^e})}{p^{de}}$.
\subsection{Non-modular representation theory in positive characteristic} In this subsection, we recall some basic definitions and results on non-modular representation theory in positive characteristic.
\par We fix an algebraically closed field $\Bbbk$ of positive characteristic $p$ and a finite subgroup $G\subseteq\mathrm{Gl}(d,\Bbbk)$ such that $p\nmid |G|$. When we say that $(V,\rho)$ is a $\Bbbk$-representation of $G$, we will always mean a finite-dimensional $\Bbbk$-linear representation of $G$, i.e., a group homomorphism $\rho:G\rightarrow\mathrm{Gl}(V)$, where $V$ is a finite-dimensional $\Bbbk$-vector space. By abuse of notation, we will sometimes just call $V$ the representation, meaning that a map $\rho$ is given as well. The dimension of the representation is just the $\Bbbk$-dimension of $V$. Thanks to Maschke's theorem, the category of $\Bbbk$-representations of $G$ has the Krull-Remak-Schmidt property, with the indecomposable objects being the irreducible representations. In other words, any representation $V$ can be uniquely decomposed (up to isomorphism) as a direct sum of irreducible representations: \begin{equation*} V\cong V_0^{c_0}\oplus\cdots\oplus V_r^{c_r}, \end{equation*} where $V_0,\dots,V_r$ are pairwise non-isomorphic irreducible representations. \begin{Notation} The natural number $c_i$ is called the \textit{multiplicity of} $V_i$ inside $V$, and we denote it by $\mult(V_i,V)=c_i$. We will use the notation $V_0$ to denote the trivial representation of $G$ given by $g\in G\mapsto 1\in \mathrm{Gl}(1,\Bbbk)=\Bbbk^*$. \end{Notation} Finally, we recall that the number of non-isomorphic irreducible $\Bbbk$-representations of $G$ is finite and equal to the number of conjugacy classes of $G$.
\begin{Definition}[Frobenius twist] Let $\Bbbk$ be a perfect field, and $V$ be a $\Bbbk$-vector space. For any positive integer $e$, we denote by $V^{1/p^e}=\{v^{1/p^e}: \ v\in V\}$ the $\Bbbk^{1/p^e} = \Bbbk$-vector space with sum and scalar multiplication given by \[ \displaystyle v_1^{1/p^e} + v_2^{1/p^e} = (v_1+v_2)^{1/p^e}, \mbox{ and} \ a\cdot v_1^{1/p^e} =(a^{p^e}v_1)^{1/p^e} \] for $a\in \Bbbk$ and $v_1^{1/p^e}, v_2^{1/p^e} \in V^{1/p^e}$. If $V$ is a $\Bbbk$-representation of a group $G$, then the composition $G\hookrightarrow \mathrm{Gl}(V)\xrightarrow{\Phi}\mathrm{Gl}(V^{1/p^e})$ shows that $V^{1/p^e}$ is also a representation of $G$, where $\Phi$ is given by $\Phi(g)(v^{1/p^e})=\ (gv)^{1/p^e}$, for $g\in G$, $v\in V$. We call this representation the \emph{$e$-th Frobenius twist} of $V$. \end{Definition}
\begin{Remark}
Let $v_1,\dots,v_s$ be a basis of $V$, and assume that the representation $V$ of $G$ is given by a matrix $(f_{i,j}(g))$, where $f_{i,j}(g) \in \Bbbk$ for all $g \in G$. Explicitly, this means that for $g \in G$ we have $g\cdot v_j=\sum_{i=1}^sf_{i,j}(g)v_i$.
Since $\Bbbk$ is algebraically closed, the elements $v_1^{1/p^e},\dots,v_s^{1/p^e}$ form a $\Bbbk$-basis of $V^{1/p^e}$, and the matrix representation of the Frobenius twist $V^{1/p^e}$ is given by $\left(f_{i,j}(g)^{1/p^e}\right)$. \end{Remark} \begin{Remark} \label{Remark_order_roots} Observe that, if $(f_{i,j}(g))$ is in diagonal form, then every element $f_{i,i}(g)$ that appears on the main diagonal is a primitive $m$-th root of unity in $\Bbbk$, where $m$ divides the order of $g$ in $G$. Since $\Bbbk$ is algebraically closed, and $p$ does not divide $m$, the map $(-)^{1/p^e}:\mu_m(\Bbbk)\rightarrow\mu_m(\Bbbk)$ is an isomorphism of groups, where $\mu_m(\Bbbk)$ denotes the group of $m$-th roots of unity in $\Bbbk$. In particular, $f_{i,i}(g)^{1/p^e}$ is also a primitive $m$-th root of unity in $\Bbbk$. \end{Remark}
\par We fix an isomorphism $\phi:\mu_{|G|}(\Bbbk)\rightarrow\mu_{|G|}(\mathbb{C})$ between the groups of $|G|$-th roots of unity in $\Bbbk$ and $|G|$-th roots of unity in $\mathbb{C}$. Let $(V,\rho)$ be a $\Bbbk$-representation of $G$ of dimension $s\geq1$ and let $g$ be an element of $G$. Since $G$ is finite and $\Bbbk$ is algebraically closed, the matrix $\rho(g)$ is diagonalizable in $\Bbbk$. We denote by $\lambda_1,\dots,\lambda_s$ the eigenvalues of $\rho(g)$, counted with multiplicity. Observe that since $\mathrm{ord}_G(g)$ divides $|G|$, $\lambda_1,\dots,\lambda_s$ are elements of $\mu_{|G|}(\Bbbk)$.
\begin{Definition}
The \emph{Brauer character} or simply the \emph{character} of $(V,\rho)$ is the function $\chi_V:G\rightarrow\mathbb{C}$ given by $\chi_{V}(g)=\phi(\lambda_1)+\cdots+\phi(\lambda_s)$. \end{Definition} We collect some properties of Brauer characters in the following proposition.
\begin{Proposition}\label{propertiesofcharacterprop}
Let $V$ be a $\Bbbk$-representation of $G$ with character $\chi_V$, and let $V_i$ be an irreducible $\Bbbk$-representation of $G$ with character $\chi_{V_i}$.
Then the following facts hold:
\begin{enumerate}
\item $\chi_V(\mathrm{Id}_G)=\dim_{\Bbbk}V$, where $\mathrm{Id}_G$ is the identity of $G$;
\item $\chi_V(g^{-1})=\overline{\chi_V(g)}$, the complex conjugate of $\chi_V(g)$, for every $g\in G$;
\item The multiplicity of $V_i$ in $V$ is given by
\[\mult(V_i,V)=\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{V_i}(g)}\cdot\chi_V(g),
\] where $\overline{\chi_{V_i}(g)}$ is the complex conjugate of $\chi_{V_i}(g)$.
\end{enumerate} \end{Proposition}
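As a toy illustration of item (3), the orthogonality relation can be tested numerically; the following Python sketch (ours, with a hypothetical choice of group and representation, not taken from the sources above) decomposes the direct sum $V_2 \oplus V_3$ of two one-dimensional representations of a cyclic group of order $5$.

\begin{verbatim}
# Numerical illustration of the multiplicity formula (informal sketch).
import cmath

n = 5
xi = cmath.exp(2j * cmath.pi / n)   # a fixed primitive n-th root of unity in C

def chi(alpha):                     # character of V_alpha, evaluated at g^j
    return lambda j: xi**(alpha * j)

chi_V = lambda j: xi**(2 * j) + xi**(3 * j)   # character of V = V_2 + V_3

for alpha in range(n):
    m = sum(chi(alpha)(j).conjugate() * chi_V(j) for j in range(n)) / n
    print(alpha, round(m.real))     # prints 1 for alpha in {2, 3}, and 0 otherwise
\end{verbatim}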
\par We conclude with the following well-known definition. \begin{Definition}\label{Def-pseudoreflection}
An element $g\in G\subseteq\mathrm{Gl}(d,\Bbbk)$ is called a \emph{pseudoreflection} if the fixed subspace $\{v\in \Bbbk^d: gv=v\}$ has dimension $d-1$. The group $G$ is called \emph{small} if it does not contain any pseudoreflections. \end{Definition}
We observe that, since $G$ is finite and $\Bbbk$ is algebraically closed, $g\in G$ is a pseudoreflection if and only if it has an eigenvalue $1$ of multiplicity $d-1$ and another eigenvalue $\lambda\neq1$ of multiplicity $1$.
\section{F-signature function of quotient singularities} \label{Section_quotient_sing}
Let $\Bbbk$ be an algebraically closed field of positive characteristic $p$, and let $G$ be a finite small subgroup of $\mathrm{Gl}(d,\Bbbk)$ such that $p\nmid |G|$. We consider a power series ring $S=\Bbbk\llbracket x_1,\dots,x_d\rrbracket$ over $\Bbbk$. The group $G$ acts linearly on $S$ with the action on the variables $x_1,\dots,x_d$ given by matrix multiplication. This defines a unique $\Bbbk$-representation of $G$ of dimension $d$, which is called the \emph{fundamental representation} of $G$. We denote by $R=S^G$ the ring of invariants under this action. This is a $d$-dimensional complete normal domain, and it is called a \emph{quotient singularity}. ADE singularities are $2$-dimensional quotient singularities where $G\subseteq\mathrm{Sl}(2,\Bbbk)$; see Examples \ref{Ex_E6} and \ref{Ex_A_{n-1}} for some explicit rings of this form.
\par Smith and Van den Bergh showed that quotient singularities have FFRT. More precisely, let $V_0,\dots,V_r$ be a complete set of non-isomorphic irreducible representations of $G$ and let $M_{\alpha}=(S\otimes_{\Bbbk}V_{\alpha})^G$ for $\alpha=0,\dots,r$. In \cite{SmithVDB}, they prove that $R$ has FFRT by the set $\mathcal{N}=\{M_0,\dots,M_r\}$, that is, for every $e\in\mathbb{N}$ the $R$-module $R^{1/p^e}$ is isomorphic to a finite direct sum of elements of $\mathcal{N}$. $R$-modules of the form $M=(S\otimes_{\Bbbk}W)^G$, where $W$ is a (not necessarily irreducible) representation of $G$, are called \textit{modules of covariants}. Direct sums of modules of covariants are still modules of covariants; therefore, by Smith and Van den Bergh's result, $R^{1/p^e}$ is a module of covariants as well. We are interested in its decomposition into indecomposable modules.
\begin{Remark} The functor $W\mapsto(S\otimes_{\Bbbk}W)^G$, which sends a $\Bbbk$-representation $W$ of $G$ to the corresponding module of covariants, is called the \textit{Auslander correspondence}. This gives a one-to-one correspondence between irreducible $\Bbbk$-representations of $G$ and indecomposable $R$-direct summands of $S$. Moreover, one has $\dim_{\Bbbk}W=\rank_R(S\otimes_{\Bbbk}W)^G$ (see \cite{Auslander} for the original proof in dimension $2$ or \cite[Chapter 5]{LeWi} for a generalization to arbitrary dimension). \end{Remark}
\begin{Theorem}[Smith-Van den Bergh]\label{theorem-SmithVandenBerg2}
For any $e\in\mathbb{N}$, let $(S/\mathfrak{m}^{[p^e]})^{1/p^e}$ be the Frobenius twist of the representation $S/\mathfrak{m}^{[p^e]}$. Then
\begin{equation*}
R^{1/p^e}\cong\left(S\otimes_{\Bbbk}\left((S/\mathfrak{m}^{[p^e]})^{1/p^e}\right)\right)^G. \end{equation*} Moreover, if $V_{\alpha}$ is an irreducible $\Bbbk$-representation of $G$ and $M_{\alpha}=(S\otimes_{\Bbbk}V_{\alpha})^G$ is the corresponding module of covariants, then
\begin{equation*}
\mult(M_{\alpha},R^{1/p^e})=\mult(V_{\alpha},(S/\mathfrak{m}^{[p^e]})^{1/p^e}).
\end{equation*} \end{Theorem}
\begin{Remark}
Notice that, if $V_0$ is the trivial representation, then $M_0=(S\otimes_{\Bbbk}V_0)^G=R$ and therefore $\mult(V_{0},(S/\mathfrak{m}^{[p^e]})^{1/p^e})=\mult(R,R^{1/p^e})=\frk_R(R^{1/p^e})=FS(e)$ is the F-signature function of $R$. \end{Remark}
\par Hashimoto and Nakajima \cite{HashimotoNakajima} computed the limits \[
\lim_{e \to \infty} \frac{\mult(M_{\alpha},R^{1/p^e})}{p^{de}} = \frac{\rank_RM_{\alpha}}{|G|}. \] The existence of the previous limits is also a consequence of \cite{SmithVDB, Seibert}, and the value for $\alpha=0$, i.e., the F-signature $\s(R)$, had been previously computed by Watanabe and Yoshida \cite{WatanabeYoshida}. However, not much is known about the functions $e\mapsto\mult(M_{\alpha},R^{1/p^e})$. The main result of this section is Theorem \ref{theorem-Fsignaturefunctionquotient}, where we prove that $\mult(M_{\alpha},R^{1/p^e})$ is a quasi-polynomial in $p^e$ and the coefficient of $p^{(d-1)e}$ is always $0$. Before stating our result, we need the following lemma, which is implicit in \cite{HashimotoNakajima}. Since the methods employed will be useful, we present a complete proof here. \begin{Lemma}\label{lemma-F-signaturefunction} Let $G \subseteq {\rm Gl}(d,\Bbbk)$ be as above. For each $g\in G$, we denote by $\lambda_{g,1},\dots,\lambda_{g,d}\in \Bbbk$ its eigenvalues, counted with multiplicity.
Let $V_{\alpha}$ be an irreducible $\Bbbk$-representation of $G$ with Brauer character $\chi_{V_{\alpha}}$ and associated $R$-module of covariants $M_{\alpha}=(S\otimes_{\Bbbk}V_{\alpha})^G$. The multiplicity of $M_{\alpha}$ in $R^{1/p^e}$ can be expressed as
\[
\mult(M_{\alpha}, R^{1/p^e})=\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{V_{\alpha}}(g)}\sum_{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d}\phi\left(((\lambda_{g,1})^{1/p^e})^{a_1}\cdots((\lambda_{g,d})^{1/p^e})^{a_d}\right).
\] \end{Lemma}
\begin{proof} By Theorem \ref{theorem-SmithVandenBerg2}, the multiplicity $\mult(M_{\alpha}, R^{1/p^e})$ is equal to the multiplicity of the representation $V_{\alpha}$ in the Frobenius twist representation $ (S/\mathfrak{m}^{[p^e]})^{1/p^e}$. By Proposition \ref{propertiesofcharacterprop}, this is equal to \[
\mult(V_{\alpha},(S/\mathfrak{m}^{[p^e]})^{1/p^e}) = \frac{1}{|G|}\sum_{g\in G}\overline{\chi_{V_{\alpha}}(g)}\cdot\chi_{(S/\mathfrak{m}^{[p^e]})^{1/p^e}}(g). \] To compute the previous sum, we fix an element $g$ of $G$. We may assume without loss of generality that the $\Bbbk$-basis
$\{x_1,\dots,x_d\}$ of the fundamental representation is such that each $x_i$ is an eigenvector of $g$ with eigenvalue $\lambda_{g,i}\in \Bbbk$, that is, $gx_i=\lambda_{g,i}x_i$. \par Now, observe that $\{x_1^{a_1}\cdots x_d^{a_d}: (a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d\}$ is a $\Bbbk$-basis of $S/\mathfrak{m}^{[p^e]}$, where each element $x_1^{a_1}\cdots x_d^{a_d}$ is an eigenvector of $g$ with eigenvalue $\lambda_{g,1}^{a_1}\cdots \lambda_{g,d}^{a_d}$. It follows that $\{(x_1^{1/p^e})^{a_1}\cdots (x_d^{1/p^e})^{a_d} : (a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d\}$ is a basis of the Frobenius twist $(S/\mathfrak{m}^{[p^e]})^{1/p^e}$ as a $\Bbbk^{1/p^e}$-vector space. Since $\Bbbk$ is perfect, it is a $\Bbbk$-basis as well. Moreover, each element $(x_1^{1/p^e})^{a_1}\cdots (x_d^{1/p^e})^{a_d}$ of the previous basis is an eigenvector of $g$ with eigenvalue $(\lambda_{g,1}^{1/p^e})^{a_1}\cdots (\lambda_{g,d}^{1/p^e})^{a_d}$. Thus, the character of $(S/\mathfrak{m}^{[p^e]})^{1/p^e}$ is given by \[ \chi_{(S/\mathfrak{m}^{[p^e]})^{1/p^e}}(g)=\sum_{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d}\phi\left(((\lambda_{g,1})^{1/p^e})^{a_1}\cdots((\lambda_{g,d})^{1/p^e})^{a_d}\right), \] and the claim is proved. \end{proof}
\begin{Definition}
Let $c\in\{0,\dots,d\}$ and let $g$ be an element of $G \subseteq \mathrm{Gl}(d,\Bbbk)$. We say that $g$ is a \emph{$c$-pseudoreflection} if it has eigenvalue $1$ with multiplicity $c$, and $d-c$ eigenvalues different from $1$. Equivalently, a $c$-pseudoreflection is an element $g \in \mathrm{Gl}(d,\Bbbk)$ such that $\rank(I_d-g) = d-c$, where $I_d$ is the identity matrix of size $d$. We denote by $G_c$ the subset of $G$ consisting of all $c$-pseudoreflections. \end{Definition}
Note that, since $G\subseteq \mathrm{Gl}(d,\Bbbk)$ is a finite group whose order is invertible in $\Bbbk$, and $\Bbbk$ is algebraically closed, each element of $G$ is diagonalizable. Moreover, observe that we can decompose $G$ as a disjoint union of the sets $G_c$.
\begin{Example} The only $d$-pseudoreflection corresponds to the identity of the group, and a $(d-1)$-pseudoreflection is just a (standard) pseudoreflection, as in Definition \ref{Def-pseudoreflection}. \end{Example}
\begin{Remark} In the literature, $c$-pseudoreflections are sometimes called $(d-c)$-reflections. In particular, (standard) pseudoreflections are sometimes called $1$-reflections, rather than $(d-1)$-pseudoreflections. We decided to adopt this convention in order to facilitate the readability of this article. \end{Remark}
\begin{Theorem}\label{theorem-Fsignaturefunctionquotient}
Let $\Bbbk$ be an algebraically closed field of positive characteristic $p$, and let $G$ be a finite small subgroup of $\mathrm{Gl}(d,\Bbbk)$ such that $p\nmid |G|$. Let $S=\Bbbk\llbracket x_1,\dots,x_d\rrbracket$ be a power series ring, and $R=S^G$ be the ring of invariants under the induced linear action of $G$. Let $V_{\alpha}$ be an irreducible $\Bbbk$-representation of $G$, and $M_{\alpha}=(S\otimes_{\Bbbk}V_{\alpha})^G$ be the corresponding indecomposable module of covariants. Then, the function $e\mapsto \mult(M_{\alpha},R^{1/p^e})$ has the following form \[
\mult(M_{\alpha},R^{1/p^e})=\frac{\rank_RM_{\alpha}}{|G|}p^{de}+\varphi_{d-2}^{(\alpha)}p^{(d-2)e}+\cdots+\varphi_{1}^{(\alpha)}p^e+\varphi_{0}^{(\alpha)},
\]
where $\varphi_{c}^{(\alpha)} = \varphi_{c}^{(\alpha)}(e)$ are functions that take values in $\mathbb{Q}$, are bounded, and are periodic of period at most $|G|-1$. Moreover, if $G$ does not contain any $c$-pseudoreflections for some $c\in\{0,\dots,d-2\}$, then $\varphi_{c}^{(\alpha)}(e) = 0$. \end{Theorem}
\begin{proof} We fix $e \in \mathbb{N}$. By Lemma \ref{lemma-F-signaturefunction} we can write the multiplicity of $M_{\alpha}$ in $R^{1/p^e}$ as \[
\mult(M_{\alpha},R^{1/p^e})=\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{\alpha}(g)}\sum_{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d}\phi\left(((\lambda_{g,1})^{1/p^e})^{a_1}\cdots((\lambda_{g,d})^{1/p^e})^{a_d}\right), \] where $\lambda_{g,1},\dots,\lambda_{g,d}$ are the eigenvalues of the element $g\in G$, and $\chi_{\alpha}$ is the character of $V_{\alpha}$.
\par We write the previous sum as \[\begin{split}
&\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{\alpha}(g)}\sum_{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d}(\phi((\lambda_{g,1})^{1/p^e}))^{a_1}\cdots(\phi((\lambda_{g,d})^{1/p^e}))^{a_d}\\
=&\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{\alpha}(g)}\sum_{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap\mathbb{N})^d}(\xi_{g,e,1})^{a_1}\cdots(\xi_{g,e,d})^{a_d}, \end{split} \]
where $\xi_{g,e,i}=\phi((\lambda_{g,i})^{1/p^e})\in\mathbb{C}$ for all $i=1,\dots,d$. Notice that since $\phi:\mu_{|G|}(\Bbbk)\rightarrow\mu_{|G|}(\mathbb{C})$ and $(-)^{1/p^e}:\mu_{|G|}(\Bbbk)\rightarrow\mu_{|G|}(\Bbbk)$ are group isomorphisms, the order of $\xi_{g,e,i}$ as a root of unity in $\mathbb{C}$ is the same as the order of $\lambda_{g,i}$ in $\Bbbk$. \par Now, rewrite the sum as \begin{equation}\label{eq_F-signature1}\begin{split}
&\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{\alpha}(g)}\sum_{a_1=0}^{p^e-1}(\xi_{g,e,1})^{a_1}\cdots\sum_{a_d=0}^{p^e-1}(\xi_{g,e,d})^{a_d}\\
=&\frac{1}{|G|}\sum_{g\in G}\overline{\chi_{\alpha}(g)}\prod_{i=1}^d\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}\\
=&\frac{1}{|G|}\sum_{c=0}^d\sum_{g\in G_c}\overline{\chi_{\alpha}(g)}\prod_{i=1}^d\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}. \end{split} \end{equation} The last equality follows from the disjoint decomposition $G=\bigsqcup_{c=0}^dG_c$, where $G_c$ is the set of $c$-pseudoreflections.
\par We analyze the last formula more closely. First, observe that each sum of the form $\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}$ is equal to $p^e$ if $\lambda_{g,i}=1$. In fact, in this case, $\xi_{g,e,i}=1$ for all $e$. On the other hand, if $\lambda_{g,i} \ne 1$, then the function $e \mapsto \left|\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}\right|$ is bounded by a constant. In fact, $\lambda_{g,i} \ne 1$ if and only if $\xi_{g,e,i} \ne 1$ for all $e$, by Remark \ref{Remark_order_roots}.
We fix $n=|G|$, and write $p^e=kn+r_e$, with $0 < r_e < n$. Since $\xi_{g,e,i}\ne 1$, we have $\sum_{a_i=jn}^{(j+1)n-1} (\xi_{g,e,i})^{a_i} = 0$ for all $j=0,\ldots, k-1$, and thus $\sum_{a_i=0}^{p^e-1} (\xi_{g,e,i})^{a_i} = \sum_{a_i=kn}^{p^e-1} (\xi_{g,e,i})^{a_i}$, which is a sum of $r_e < n$ roots of unity; in particular, its absolute value is bounded by $n$, independently of $e$.
\par Now, fix $c\in\{0,\dots,d\}$. Following the previous argument, for each $g\in G_c$ and all $e$, exactly $c$ of the elements $\xi_{g,e,1},\dots,\xi_{g,e,d}$ are equal to $1$. Therefore, \[ \prod_{i=1}^d\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}=\eta_{g,c}p^{ce} \]
for some function $\eta_{g,c} = \eta_{g,c}(e)$ that, for all $e \in \mathbb{N}$, satisfies $|\eta_{g,c}(e)| < C$ for some $C>0$ independent of $e$. Taking the sum over all $g\in G_c$, we obtain \[
\frac{1}{|G|}\sum_{g\in G_c}\overline{\chi_{\alpha}(g)}\prod_{i=1}^d\sum_{a_i=0}^{p^e-1}(\xi_{g,e,i})^{a_i}=\varphi_{c}^{(\alpha)}p^{ce}, \]
where $\varphi_{c}^{(\alpha)} = \varphi_{c}^{(\alpha)}(e) = \frac{1}{|G|}\sum_{g\in G_c}\overline{\chi_{\alpha}(g)} \eta_{g,c}(e)$. Note that $\left|\varphi_c^{(\alpha)}(e)\right|$ can also be bounded by a constant independent of $e$, because $|G_c|$ and $\left|\overline{\chi_{\alpha}(g)}\right|$ are independent of $e$. Inserting the last formula in \eqref{eq_F-signature1}, we get \[\mult(M_{\alpha},R^{1/p^e})=\sum_{c=0}^d\varphi_{c}^{(\alpha)}p^{ce}. \]
This shows that $\mult(M_\alpha,R^{1/p^e})$ is a quasi-polynomial; the fact that $\mult(M_\alpha,R^{1/p^e}) \in \mathbb{N}$ for all $e \in \mathbb{N}$ now gives that the functions $\varphi_c^{(\alpha)}$ take values in $\mathbb{Q}$. In addition, it is clear from the description of $\varphi_c^{(\alpha)}$ that $G_c=\emptyset$ implies $\varphi_{c}^{(\alpha)}=0$. Therefore, since $G$ does not contain any $(d-1)$-pseudoreflections, we have $G_{d-1}=\emptyset$, and consequently $\varphi_{d-1}^{(\alpha)}(e)=0$ for all $e \in \mathbb{N}$. Furthermore, $\varphi_{d}^{(\alpha)}=\frac{\rank_RM_{\alpha}}{|G|}$ follows from the fact that $G_d=\{\mathrm{Id}_G\}$, and $\overline{\chi_{\alpha}(\mathrm{Id}_G)}=\dim_{\Bbbk}V_{\alpha}=\rank_RM_{\alpha}$.
\par It is left to show that the functions $e \mapsto \varphi_{c}^{(\alpha)}(e)$ are periodic. It is enough to show that each function $e \mapsto \sum_{a_i=0}^{p^e-1} (\xi_{g,e,i})^{a_i}$ is periodic, for all $g$ and $i$ such that $\lambda_{g,i} \ne 1$. Since $p \nmid n$, where $n=|G|$, we can find $e'$ such that $p^{e'} \equiv 1$ modulo $n$. Note that we can choose $e'$ to be the order of $p$ in the group of units of $\mathbb{Z}/(n)$; in particular, we can assume that $e' \leq |G|-1$. Observe that $\lambda_{g,i}^{p^{e'}} = \lambda_{g,i}$, because $\lambda_{g,i}^n=1$. Since $(\phi^{-1}(\xi_{g,1,i}))^p=\lambda_{g,i}$, we get $(\phi^{-1}(\xi_{g,1,i}))^{pp^{e'}} = \lambda_{g,i} = (\phi^{-1}(\xi_{g,1,i}))^p$, and it follows that $\xi_{g,1,i} = \xi_{g,e'+1,i}$. Finally, since this is true for all $g$ and $i$ such that $\lambda_{g,i} \ne 1$, we have that $\varphi_{c}^{(\alpha)}(e) = \varphi_{c}^{(\alpha)}(e+e')$ for all $e \in \mathbb{N}$. \end{proof}
We postpone to Section \ref{section:examples} the presentation of some examples, which show how Theorem \ref{theorem-Fsignaturefunctionquotient} can be used to compute the F-signature function of specific quotient singularities (see e.g. Example~\ref{Ex_E6} and Example~\ref{Ex_3-VeroneseD6}). \par We have shown in Theorem \ref{theorem-Fsignaturefunctionquotient} that $G_c=\emptyset$ implies that $\varphi_c^{(\alpha)}=0$ for all $\alpha$. We can prove a converse statement, provided $\alpha=0$. In other words, the vanishing of the function $\varphi_c^{(0)}$ is equivalent to the absence of $c$-pseudoreflections. In order to simplify the notation, in the sequel when no confusion may arise, we will simply denote the function $\varphi_c^{(0)}$ by $\varphi_c$.
\begin{Proposition}\label{prop-Fsignaturefuncionquotient} With the notations of Theorem \ref{theorem-Fsignaturefunctionquotient}, for any $c\in\{0,\dots,d-2\}$, we have $\varphi_{c}(e) = 0$ for all $e\in\mathbb{N}$ if and only if $G$ does not contain $c$-pseudoreflections. \end{Proposition} \begin{proof} \par The \textit{if} part of the statement has been proved in Theorem \ref{theorem-Fsignaturefunctionquotient}, so it remains to prove the \textit{only if} part.
For this, fix $e'$ such that $p^{e'}\equiv1$ modulo $|G|$.
For $g\in G_c$ we denote by $\lambda_{g,1},\dots,\lambda_{g,d}$ its eigenvalues, and we set $\xi_{g,e',i}=\phi((\lambda_{g,i})^{1/p^{e'}})\in\mathbb{C}$ as in the proof of Theorem \ref{theorem-Fsignaturefunctionquotient}. Since $g$ is a $c$-pseudoreflection, there are exactly $c$ values in the set $\{\xi_{g,e',1},\dots,\xi_{g,e',d}\}$ that are equal to $1$. Without loss of generality, we may assume that $\xi_{g,e',1}=\dots=\xi_{g,e',c}=1$. Using the formula for $\varphi_c$ obtained in the proof of Theorem \ref{theorem-Fsignaturefunctionquotient}, the assumption that $\varphi_c(e')=0$ gives \begin{equation*} \begin{split}
0&=\frac{1}{|G|}\sum_{g\in G_c}\prod_{i=1}^d\sum_{a_i=0}^{p^{e'}-1}(\xi_{g,e',i})^{a_i}\\
&=\frac{1}{|G|}\sum_{g\in G_c}\prod_{i=c+1}^d\sum_{a_i=0}^{p^{e'}-1}(\xi_{g,e',i})^{a_i}\cdot(p^{e'})^c\\
&=\frac{1}{|G|}\sum_{g\in G_c}\left(\prod_{i=c+1}^d1\right)(p^{e'})^c\\
&=\frac{1}{|G|}|G_c|(p^{e'})^c, \end{split} \end{equation*}
which implies $|G_c|=0$. In the previous chain of equalities, from the second to the third line we used the fact that, writing $p^{e'}=kn+1$ with $n=|G|$, \[\sum_{a_i=0}^{p^{e'}-1}(\xi_{g,e',i})^{a_i}=(\xi_{g,e',i})^{kn}=1 \ \mbox{ for } \ i>c,
\] since the sum over each of the $k$ full periods vanishes, and the single remaining term is $(\xi_{g,e',i})^{kn}=\left((\xi_{g,e',i})^{n}\right)^{k}=1$; this uses our choice of $e'$ with $p^{e'} \equiv 1$ modulo $|G|$. \end{proof}
The following Corollary is a direct consequence of the proof of Proposition \ref{prop-Fsignaturefuncionquotient}. \begin{Corollary}\label{corollary-fsignaturecoprime}
For any $e\in\mathbb{N}$ such that $p^e\equiv 1$ modulo $|G|$, we have
\begin{equation*}
FS(e)=\frac{1}{|G|}p^{de}+\frac{|G_{d-2}|}{|G|}p^{(d-2)e}+\cdots+\frac{|G_1|}{|G|}p^e+\frac{|G_0|}{|G|}.
\end{equation*}
In particular, if $p\equiv 1$ modulo $|G|$, then this holds for all $e\in\mathbb{N}$, so the F-signature function of $R$ is a polynomial in $p^e$ whose coefficients do not depend on $e$. \end{Corollary}
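For cyclic groups, the cardinalities $|G_c|$ appearing in the corollary can be computed by scanning the powers of a generator. The following Python sketch (our code, for illustration only) does this for the $\frac{1}{6}(1,1,3,3)$-singularity of Example \ref{ex-pseudoreflectionsVSgJ=1}, with $s=2$ and $t=3$.

\begin{verbatim}
# Counting c-pseudoreflections in a cyclic group 1/n(t_1,...,t_d) (informal sketch).
def pseudoreflection_counts(n, t):
    counts = [0] * (len(t) + 1)          # counts[c] = |G_c|
    for j in range(n):                   # g^j has eigenvalues lambda^{j t_i}
        c = sum(1 for ti in t if (j * ti) % n == 0)
        counts[c] += 1
    return counts

print(pseudoreflection_counts(6, (1, 1, 3, 3)))
# [3, 0, 2, 0, 1]: |G_0| = 3, |G_2| = 2, |G_4| = 1 (the identity); G_1, G_3 are empty.
# Hence FS(e) = (p^{4e} + 2 p^{2e} + 3)/6 whenever p^e = 1 mod 6.
\end{verbatim}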
\begin{Remark} \label{rem_graded}
We state the results of this section in the complete local case; however, analogous versions are true in the graded setting.
More precisely, let $\Bbbk$ be an algebraically closed field of characteristic $p>0$, let $S=\Bbbk[x_1,\dots,x_d]$ with $\deg x_i=1$, and let $G\subseteq\mathrm{Gl}(d,\Bbbk)$ be a finite small group with $p\nmid |G|$. We consider the corresponding invariant ring $R=S^G$, which is $\mathbb{N}$-graded.
The multiplicity functions $\mult(M_{\alpha},R^{1/p^e})$ are defined similarly to the local case (see \cite[Section 3.1]{SmithVDB} for more details).
The Auslander correspondence between irreducible $\Bbbk$-representations of $G$ and graded indecomposable $R$-direct summands of $S$ is true also in this setting (see \cite[Section 4]{IyamaTakahashi} for a proof) and a graded version of Theorem \ref{theorem-SmithVandenBerg2} has been proved by Hashimoto and Nakajima \cite[Proposition 2.2]{HashimotoNakajima}.
Therefore, Theorem \ref{theorem-Fsignaturefunctionquotient}, Proposition \ref{prop-Fsignaturefuncionquotient}, and Corollary \ref{corollary-fsignaturecoprime} hold in this setting as well with analogous proofs. \end{Remark}
\section{F-signature function of cyclic quotient singularities}\label{section:cyclicquotient} Let $S=\Bbbk\ps{x_1,\ldots,x_d}$, where $\Bbbk$ is an algebraically closed field of characteristic $p>0$. Let $G \subseteq \mathrm{Gl}(d,\Bbbk)$ be a finite small subgroup of order $n$, with $p \nmid n$. Throughout this section, we assume that $G$ is cyclic. In particular, we may assume that $G$ is generated by an element $g=\mathrm{diag}(\lambda^{t_1},\dots,\lambda^{t_d})$, where $\lambda\in \Bbbk$ is a primitive $n$-th root of unity and $t_1,\dots,t_d$ are non-negative integers. It is harmless to assume $\gcd(t_1,\ldots,t_d,n)=1$. Moreover, since $G$ is small, we must have $\gcd(t_{j_1},\ldots,t_{j_{d-1}},n)=1$ for all subsets $\{j_1,\ldots,j_{d-1}\} \subseteq [d]$ of cardinality $d-1$. The ring $R=S^G$ of invariants with respect to the action of $G$ is called a cyclic quotient singularity, which we will denote by $\frac{1}{n}(t_1,t_2,\ldots,t_d)$. In this setup, we can apply Theorem \ref{theorem-Fsignaturefunctionquotient} to describe the functions $e \mapsto \mult(M_\alpha,R^{1/p^e})$. However, given the special structure of the group $G$, we can say more about these functions.
\begin{Remark} \label{Remark_irreps_cyclic} When $G$ is a cyclic small group of order $n$, there are precisely $n$ irreducible $\Bbbk$-representations $V_{0},\dots,V_{n-1}$ of $G$, and they all have dimension $1$. Furthermore, if $g$ denotes a fixed generator of $G$, then for each $\alpha\in\{0,\dots,n-1\}$ the Brauer character $\chi_{V_\alpha}$ satisfies $\chi_{V_\alpha}(g)=\xi^{j}$ for some $0 \leq j \leq n-1$ and some primitive $n$-th root of unity $\xi\in\mathbb{C}$. We will then assume, without loss of generality, that the irreducible $\Bbbk$-representations are labeled so that $\chi_{V_\alpha}(g) = \xi^{\alpha}$, for all $\alpha$. \end{Remark}
In what follows, we denote by $\mathcal{P}=[0,1]^d$ the unit cube inside $\mathbb{R}^d$ and, for each $\alpha \in \{0,\ldots,n-1\}$, we let $\mathcal{A}^{(\alpha)}$ be the lattice
\begin{equation}\label{eq:latticeA}
\displaystyle \mathcal{A}^{(\alpha)}=\{(a_1,\ldots,a_{d}) \in \mathbb{Z}^d : t_1a_1 + t_2a_2 + \ldots + t_{d}a_{d} \equiv \alpha \mod n\}.
\end{equation} We start by relating the functions $e \mapsto \mult(M_\alpha, R^{1/p^e})$ to the number of lattice points inside multiples of the cube $\mathcal{P}$. \begin{Proposition}\label{prop-Fsignatureofcyclic} Let $R$ be a $\frac{1}{n}(t_1,t_2,\dots,t_d)$ cyclic quotient singularity over an algebraically closed field of characteristic $p \nmid n$, let $\alpha\in\{0,\ldots,n-1\}$, and let $e\in\mathbb{N}$. Then
\[
\displaystyle \mult(M_\alpha,R^{1/p^e})= |(p^e-1)\mathcal{P}\cap \mathcal{A}^{(\alpha)}|.
\] \end{Proposition}
\begin{proof} Since $G$ is cyclic, its elements can be written as $g^j$ for $j=0,\dots,n-1$. In particular, observe that the eigenvalues of $g^j$ are $\lambda^{jt_1}$, $\lambda^{jt_2},\dots,\lambda^{jt_d}$. Let $\xi_e = \phi(\lambda^{1/p^e})$ be the image in $\mathbb{C}$ of the unique $p^e$-th root of $\lambda$ in $\Bbbk$. Notice that $\xi_e$ is a primitive complex $n$-th root of unity, so by Remark \ref{Remark_irreps_cyclic} we may assume that $\chi_{V_\alpha}(g)=\xi_e^\alpha$. Observe that $\overline{\chi_{V_\alpha}(g)} = \overline{\xi_e^\alpha} = \xi_e^{-\alpha}$. Then from Lemma \ref{lemma-F-signaturefunction} we obtain \begin{align*} \mult(M_\alpha,R^{1/p^e}) & = \frac{1}{n} \sum_{j=0}^{n-1} \xi_e^{-j\alpha} \sum_{\tiny{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap \mathbb{N})^d}} \xi_e^{j(t_1a_1+t_2a_2 + \ldots + t_{d}a_{d})} \\ & = \frac{1}{n} \sum_{j=0}^{n-1} \sum_{\tiny{(a_1,\ldots,a_d) \in ([0,p^e-1]\cap \mathbb{N})^d}} \xi_e^{j(t_1a_1+t_2a_2 + \ldots + t_{d}a_{d}-\alpha)}. \end{align*} Since $\sum_{j=0}^{n-1} \xi_e^{ij} = 0$ for all $i \not\equiv 0$ modulo $n$, the only contribution to the sum above is for $(a_1,\ldots,a_d)$ such that $t_1a_1+t_2a_2 + \ldots + t_{d}a_{d} \equiv \alpha$ modulo $n$, in which case $\xi_e^{t_1a_1+t_2a_2 + \ldots + t_{d}a_{d}-\alpha} = 1$. Therefore \begin{align*} \mult(M_\alpha,R^{1/p^e}) & = \frac{1}{n} \sum_{j=0}^{n-1} \sum_{\tiny{\begin{array}{c} t_1a_1+t_2a_2 + \ldots + t_{d}a_{d} \equiv \alpha \mod n \\ (a_1,\ldots,a_d) \in ([0,p^e-1]\cap \mathbb{N})^d \end{array}}} 1 \\
& = |\{(a_1,\ldots,a_{d}) \in ([0,p^e-1] \cap \mathbb{N})^d : t_1a_1 + t_2a_2 + \ldots + t_{d}a_{d} \equiv \alpha \mod n\}|\\
& = |(p^e-1)\mathcal{P} \cap \mathcal{A}^{(\alpha)}|. \end{align*} \end{proof}
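The lattice-point count in the proposition is straightforward to carry out by brute force for small parameters. The following Python sketch (ours, purely for illustration) computes all the multiplicity functions of the $\frac{1}{3}(1,2)$-singularity in characteristic $2$ at $e=2$.

\begin{verbatim}
# mult(M_alpha, R^{1/p^e}) as a lattice-point count (informal sketch).
from itertools import product

def mult(alpha, p, e, n, t):
    q = p**e
    return sum(1 for a in product(range(q), repeat=len(t))
               if sum(ai * ti for ai, ti in zip(a, t)) % n == alpha % n)

p, e, n, t = 2, 2, 3, (1, 2)
values = [mult(alpha, p, e, n, t) for alpha in range(n)]
print(values, sum(values) == (p**e)**len(t))   # [6, 5, 5] True: counts add up to p^{de}
\end{verbatim}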
Proposition \ref{prop-Fsignatureofcyclic} exhibits a connection between the F-signature function of cyclic quotient singularities and Ehrhart functions of rational polytopes. This is not surprising: in fact, cyclic quotient singularities are toric, and Von Korff proved that the F-signature function of toric rings is an Ehrhart function \cite{VonKorff} (see also \cite{Bruns} for related results). However, while in Von Korff's approach the lattice is $\mathbb{Z}$ and the polytope is not a cube, in Proposition \ref{prop-Fsignatureofcyclic} the lattice is more complicated, but the polytope is a cube. The advantage of our method is that it allows one to compute the coefficients of the quasi-polynomial $\mult(M_\alpha,R^{1/p^e})$ more explicitly, and to relate them to properties of the group $G$ (see Theorem \ref{theorem-Fsignaturecyclic}).
\subsection{Congruences and partitions} \label{Subsection_congruence} In this subsection we recall some well-known facts about congruences modulo an integer. The results and the methods of this subsection are general in nature and independent of the cyclic quotient singularities setting. However, the notation we introduce and lemmas we prove here will be used in the rest of Section \ref{section:cyclicquotient}.
The following Lemma about the number of solutions of certain congruence relations is a well-known classical result; therefore, we omit its proof. \begin{Lemma}\label{idolo} Let $t_1,\ldots,t_i,n,b$ be non-negative integers, with $n\ne 0$, and set $g=\gcd(t_1,\ldots,t_i,n)$; assume that $g$ divides $b$. The congruence $t_1x_1 + \ldots + t_ix_i \equiv b$ modulo $n$ has $g \cdot n^{i-1}$ incongruent solutions $(x_1,\ldots,x_i) \in \mathbb{Z}/(n)^{\oplus i}$. \end{Lemma}
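For the reader who wishes to experiment, the count of Lemma \ref{idolo} can be confirmed by direct enumeration, as in the following sketch (our code, with an arbitrary choice of parameters).

\begin{verbatim}
# Brute-force check of the counting lemma (informal sketch).
from itertools import product
from functools import reduce
from math import gcd

def count_solutions(t, n, b):
    # solutions (x_1,...,x_i) in (Z/n)^i of t_1 x_1 + ... + t_i x_i = b mod n
    return sum(1 for x in product(range(n), repeat=len(t))
               if sum(ti * xi for ti, xi in zip(t, x)) % n == b % n)

t, n, b = (4, 6), 10, 2
g = reduce(gcd, t + (n,))               # g = gcd(4, 6, 10) = 2, and g divides b = 2
assert count_solutions(t, n, b) == g * n**(len(t) - 1)   # 2 * 10^1 = 20 solutions
\end{verbatim}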
We now introduce some notation that will largely be used in the rest of this section. \begin{Notation} \label{notation_Gamma}
Fix positive integers $d,n$ and $p>1$, with $\gcd(p,n)=1$. Fix a natural number $e$, and write $p^e=nk+r_e$, with $0 < r_e < n$. For every $0 \leq i \leq d$ we let $\Gamma_i =\{ J \subseteq [d] : |J|=i\}$. For $J \in \Gamma_i$, we let $C_J = \prod_{j=1}^d \left([0,b_j] \cap \mathbb{N}\right) \subseteq \mathbb{N}^d$, with \[ \displaystyle b_j= \left\{\begin{array}{ll} n-1 & \mbox{ if } j \in J \\ \\
r_e-1 & \mbox{ if } j \notin J
\end{array}
\right. \] For example, we have $C_{[d]} = ([0,n-1] \cap \mathbb{N})^d$, and $C_{\emptyset}= ([0,r_e-1]\cap \mathbb{N})^d$.
Now let $1 \leq i \leq d$. For $J = \{j_1,\ldots,j_i\} \in \Gamma_i$, with $j_1 < j_2 < \ldots < j_i$, we let $\sigma_J :J \to [i]$ be the map defined as $\sigma_J(j_\ell) = \ell$ for all $1 \leq \ell \leq i$. For $\underline{s}=(s_1,\ldots,s_i) \in \mathbb{N}^i$ we define a vector $v_{J,\underline{s}} = ((v_{J,\underline{s}})_1,\ldots,(v_{J,\underline{s}})_d) \in \mathbb{N}^d$ in the following way: \[ \displaystyle (v_{J,\underline{s}})_j= \left\{ \begin{array}{ll} s_{\sigma_J(j)} & \mbox{ if } j \in J \\ \\ k & \mbox{ if } j\notin J \end{array} \right. \] Finally, for convenience, we set $([0,k-1]\cap\mathbb{N})^0=\{\star\}$, and $v_{\emptyset,\star} = (k,\ldots,k)$. \end{Notation} Given a set $C \subseteq \mathbb{N}^d$ and a $d$-uple $(a_1,\ldots,a_d)$, we denote by $C+(a_1,\ldots,a_d)$ the Minkowski sum $\{(c_1+a_1,\ldots,c_d+a_d) : (c_1,\ldots,c_d)\in C\}$. We will call it the shift of the set $C$ by $(a_1,\ldots,a_d)$. With the notation we have introduced, we can partition the set $([0,p^e-1]\cap \mathbb{N})^d$ into shifts of sets of the form $C_J$, for $J \subseteq [d]$. \begin{Lemma} \label{partition} We have the following partition: \begin{align*} \displaystyle \left([0,p^e-1] \cap \mathbb{N}\right)^d & = \bigsqcup_{i=0}^d \bigsqcup_{J \in \Gamma_i} \left(\bigsqcup_{\tiny{ \underline{s} \in ([0,k-1] \cap \mathbb{N})^i}} (C_J + nv_{J,\underline{s}}) \right). \end{align*} \end{Lemma} Note that, on the right-hand side of the equation, the sets $C_J$ and the vectors $v_{J,\underline{s}}$ depend on $e$. At this stage, we have decided to keep the dependence of these objects on $e$ implicit, since we believe this should not be a source of confusion, while adding it to the notation would only make the statement harder to read. Before starting the proof, to better illustrate our notation and the statement of the Lemma, we display the partition when $d=p=2$ and $e=n=3$, in which case we have $k=r_e=2$. \begin{figure}
\caption{Case $d=p=2$, $e=n=3$}
\label{figure}
\end{figure}
\begin{proof}[Proof of Lemma \ref{partition}] For $\underline{s} \in ([0,k-1]\cap \mathbb{N})^i$, each $C_J + nv_{J,\underline{s}}$ is contained in $([0,p^e-1]\cap \mathbb{N})^d$. Therefore, the union of such sets is contained in $([0,p^e-1]\cap\mathbb{N})^d$ as well. To see the other containment, let $(a_1,\ldots,a_d) \in ([0,p^e-1]\cap \mathbb{N})^d$. Let $J = \{j_1,\ldots,j_i\}$, with $j_1< \ldots < j_i$, be the set of $j \in [d]$ such that $a_j<kn$. If $J=\emptyset$, then $(a_1,\ldots,a_d) \in ([kn,p^e-1]\cap \mathbb{N})^d = C_\emptyset + nv_{\emptyset,\star}$. If $J \ne \emptyset$, for each $j_\ell \in J$, write $a_{j_\ell} = s_{j_\ell}n + r_{j_\ell}$, with $0 \leq r_{j_\ell} < n$, and set $\underline{s} = (s_{j_1},\ldots,s_{j_i})$. With these choices, one can check that $(a_1,\ldots,a_d) \in C_J+nv_{J,\underline{s}}$. It is also straightforward, and we leave it to the reader, to check that all the sets $C_J + nv_{J,\underline{s}}$ are disjoint. \end{proof}
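The partition of Lemma \ref{partition} can also be verified computationally in small cases. The sketch below (our code, illustrative only) reproduces the situation of Figure \ref{figure}, where $d=p=2$ and $e=n=3$.

\begin{verbatim}
# Verifying the partition of [0, p^e - 1]^d for small parameters (informal sketch).
from itertools import product, combinations

d, p, e, n = 2, 2, 3, 3
q = p**e
k, r = divmod(q, n)                  # p^e = kn + r_e; here k = r_e = 2

cells = []
for i in range(d + 1):
    for J in combinations(range(d), i):
        sides = [range(n) if j in J else range(r) for j in range(d)]
        C_J = list(product(*sides))
        for s in product(range(k), repeat=i):
            # v_{J,s}: the entries of s on the coordinates in J, the value k elsewhere
            v = [s[J.index(j)] if j in J else k for j in range(d)]
            cells.append({tuple(c + n * w for c, w in zip(pt, v)) for pt in C_J})

union = set().union(*cells)
assert union == set(product(range(q), repeat=d))   # the cells cover the whole cube
assert sum(len(c) for c in cells) == len(union)    # and they are pairwise disjoint
\end{verbatim}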
We need some additional notation. \begin{Notation} \label{notation_psi} Fix positive integers $d,n$ and $p$, with $\gcd(p,n)=1$, and non-negative integers $t_1,\dots,t_d$. Let $0\leq i< d$ be an integer, and $J \in \Gamma_i$. Write $J=\{j_1,\ldots,j_i\}$, and let $g_J = \gcd(t_{j_1},\ldots,t_{j_i},n)$, where for convenience we set $g_\emptyset = n$. We write $[d] \smallsetminus J = \{h_1,\ldots,h_{d-i}\}$.
Given a positive integer $e$, write $p^e=kn+r_e$, with $0 < r_e<n$. For $\alpha \in \{0,\ldots,n-1\}$, let \[ \displaystyle \mathcal{B}_J^{(\alpha)}(e)= \left\{(a_{1},\ldots,a_{d-i}) \in ([0,r_e-1] \cap \mathbb{N})^{d-i} : \sum_{\ell=1}^{d-i} a_\ell t_{h_\ell} \equiv \alpha\mod g_J\right\}. \] Finally, for all $e \in \mathbb{N}$, we define $\theta_J^{(\alpha)}(e)$ to be the cardinality of the set $\mathcal{B}_J^{(\alpha)}(e)$. \par In other words, $\theta_J^{(\alpha)}(e)$ counts the number of incongruent $(d-i)$-uples $(\overline{a_1},\ldots,\overline{a_{d-i}})$ in $\mathbb{Z}/(r_e)^{\oplus (d-i)}$ such that their lifts $(a_1,\ldots,a_{d-i})$ to $\mathbb{Z}$, with $0\leq a_\ell \leq r_e-1$, satisfy $\sum_{\ell =1}^{d-i} a_\ell t_{h_\ell} +ag_J = \alpha$ for some $a \in \mathbb{Z}$. For convenience, we set $\theta_{[d]}^{(\alpha)}(e) = 1$ for all $\alpha=0,\ldots, n-1$, and $e \in \mathbb{N}$. For $i\in\{0,\ldots,d\}$ and $\alpha \in\{ 0,\ldots,n-1\}$, consider the following functions \[ e \in \mathbb{N} \mapsto \psi_i^{(\alpha)}(e) = \sum_{J \in \Gamma_i} g_J\theta_J^{(\alpha)}(e). \] When no confusion may arise, we will simply denote $\theta_J^{(0)}$ and $\psi_i^{(0)}$ by $\theta_J$ and $\psi_i$. \end{Notation} The functions $\theta_J^{(\alpha)}$ count the number of solutions of certain diophantine equations in linearly bounded regions. This problem has been studied in the context of \emph{integer linear programming}. The interested reader may consult \cite{IntegerProgrammingBook} for an introduction to this research area. A recursive formula for $\theta_J^{(\alpha)}$ can also be deduced from \cite{Faaland}.
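Although closed formulas for $\theta_J^{(\alpha)}(e)$ are subtle, these functions are easy to evaluate by direct enumeration. The following sketch (ours; the indices are $0$-based, so $J \subseteq \{0,\ldots,d-1\}$) implements the definition verbatim.

\begin{verbatim}
# Evaluating theta_J^{(alpha)}(e) by direct enumeration (informal sketch).
from itertools import product
from functools import reduce
from math import gcd

def theta(J, alpha, e, p, n, t):
    r = p**e % n                         # r_e, assuming gcd(p, n) = 1
    gJ = reduce(gcd, [t[j] for j in J] + [n])
    comp = [j for j in range(len(t)) if j not in J]
    return sum(1 for a in product(range(r), repeat=len(comp))
               if sum(al * t[j] for al, j in zip(a, comp)) % gJ == alpha % gJ)

# Example: 1/3(1,2), p = 2, e = 1, so r_e = 2; J = {} gives g_J = n = 3
print(theta((), 0, 1, 2, 3, (1, 2)))     # counts (0,0) and (1,1): prints 2
\end{verbatim}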
\begin{Remark} \label{rem_psi} For all $J \in \Gamma_i$, $\alpha\in\{0,\ldots,n-1\}$, and $e \in \mathbb{N}$, we have the bounds $0 \leq \theta_J^{(\alpha)}(e) \leq r_e^{d-i}$. The upper bound is clear from the range in which $(a_1,\ldots,a_{d-i})$ varies. When $\alpha=0$, the lower bound can be improved to $1 \leq \theta_J^{(0)}(e)$ for all $J$ and all $e$, since the $(d-i)$-uple $(0,\ldots,0)$ always belongs to $\mathcal{B}_J^{(0)}(e)$. Note that the upper bound $\theta_J^{(\alpha)}(e) = r_e^{d-i}$ is always achieved, independently of $\alpha$, when $g_J = 1$. Moreover, if $r_e=1$, then for all $J$ we have $\theta^{(\alpha)}_J(e) = 0$ when $\alpha \ne 0$, while $\theta^{(0)}_J(e) = 1$. \end{Remark} \begin{Proposition} \label{counting C_J} We adopt the notation introduced in Notations \ref{notation_Gamma} and \ref{notation_psi}. For $\alpha\in\{0,\dots,n-1\}$, we consider the lattice $\mathcal{A}^{(\alpha)}$ defined in \eqref{eq:latticeA}. Let $0 \leq i \leq d$ be an integer, and $J \in \Gamma_i$. Write $J=\{j_1,\ldots,j_i\}$ and let $g_J = \gcd(t_{j_1},\ldots,t_{j_i},n)$. For $e \in \mathbb{N}$, write $p^e=kn+r_e$, with $0 < r_e < n$. Then
\[
\displaystyle \left|\left(C_J + nv_{J,\underline{s}}\right) \cap \mathcal{A}^{(\alpha)} \right| = |C_J \cap \mathcal{A}^{(\alpha)}| = \theta_J^{(\alpha)}(e)\,g_Jn^{i-1}
\] for all $\underline{s} \in ([0,k-1] \cap \mathbb{N})^i$. \end{Proposition}
\begin{proof} To prove the first equality, let $\underline{s}\in ([0,k-1] \cap \mathbb{N})^i$ be arbitrary. Then \begin{align*} (a_1,\ldots,a_d) \in \left(C_J + nv_{J,\underline{s}}\right) \cap \mathcal{A}^{(\alpha)} & \Longleftrightarrow \left\{ \begin{array}{ll} (a_1,\ldots,a_d) -nv_{J,\underline{s}} \in C_J \\ \\ t_1a_1+t_2 a_2+\ldots+t_d a_d \equiv \alpha \mod n \end{array} \right. \\ \\ & \Longleftrightarrow \left\{ \begin{array}{ll} (a_1,\ldots,a_d) -nv_{J,\underline{s}} \in C_J \\ \\ t_1(a_1-n(v_{J,\underline{s}})_1)+\ldots+t_d (a_d -n(v_{J,\underline{s}})_d) \equiv \alpha \mod n \end{array} \right. \\ \\ & \Longleftrightarrow (a_1,\ldots,a_d) - nv_{J,\underline{s}} \in C_J \cap \mathcal{A}^{(\alpha)} \end{align*}
Since this defines a one-to-one correspondence between the points of the two sets, the first equality is proved. In particular, $\left|\left(C_J + nv_{J,\underline{s}}\right) \cap \mathcal{A}^{(\alpha)} \right|$ is independent of $\underline{s}$. To explicitly express the cardinality of these sets, we note that \begin{align*} &C_J \cap \mathcal{A}^{(\alpha)} = \{(a_1,\ldots,a_d) \in C_J : t_1a_1+t_2a_2 + \ldots + t_da_d \equiv \alpha \mod n\} \\ \\ & =\{(a_1,\ldots,a_d) : \sum_{\ell \in J} t_\ell a_\ell \equiv \alpha -\sum_{\ell \notin J} t_\ell a_\ell \mod n, 0 \leq a_\ell \leq n-1 \mbox{ if } \ell \in J, 0 \leq a_\ell \leq r_e-1 \mbox{ if } \ell \notin J\} \\ \\ &= \bigsqcup_{\tiny{\begin{array}{c} 0 \leq a_\ell \leq r_e-1 \\ \ell \notin J\end{array}}} \{(a_{j_1},\ldots,a_{j_i}) \in ([0,n-1]\cap\mathbb{N})^i : \sum_{\ell \in J} t_\ell a_\ell \equiv \alpha -\sum_{\ell \notin J} t_\ell a_\ell \mod n\}. \end{align*} Observe that, for a given choice of a $(d-i)$-uple $(a_\ell)_{\ell \notin J}$, the congruence $\sum_{\ell \in J} t_\ell a_\ell \equiv \alpha -\sum_{\ell \notin J} t_\ell a_\ell$ modulo $n$ has a solution $(a_{j_1},\ldots,a_{j_i})$ if and only if $g_J$ divides $\alpha-\sum_{\ell \notin J} a_\ell t_\ell$. In turn, this happens if and only if $(a_\ell)_{\ell \notin J} \in \mathcal{B}_J^{(\alpha)}(e)$, as defined in Notation \ref{notation_psi}. For every such $(a_\ell)_{\ell \notin J} \in \mathcal{B}_J^{(\alpha)}(e)$, we have $g_Jn^{i-1}$ incongruent solutions, by Lemma \ref{idolo}. Summing up, we have \begin{align*}
\displaystyle |C_J \cap \mathcal{A}^{(\alpha)}| & = \sum_{\tiny{\begin{array}{c} \ell \notin J\\ (a_\ell) \in \mathcal{B}_J^{(\alpha)} \end{array}}} \left|\left\{(a_{j_1},\ldots,a_{j_i}) \in ([0,n-1] \cap \mathbb{N})^i : \sum_{\ell \in J} t_\ell a_\ell \equiv \alpha-\sum_{\ell \notin J} t_\ell a_\ell \mod n\right\}\right| \\ \\
& = \theta_J^{(\alpha)}(e) \cdot \left|\left\{(a_{j_1},\ldots,a_{j_i}) \in ([0,n-1] \cap \mathbb{N})^i : \sum_{\ell \in J} t_\ell a_\ell \equiv \alpha -\sum_{\ell \notin J} t_\ell a_\ell \mod n \right\}\right| \\ & = \theta_J^{(\alpha)}(e)\cdot g_Jn^{i-1}. \end{align*} \end{proof} \begin{Remark} To illustrate the statement of Proposition \ref{counting C_J}, we refer to the specific example of Figure \ref{figure}, in the case $t_1=1$, $t_2=2$, and $\alpha=0$. \begin{figure}\label{figure2}
\end{figure}
The points inside $(C_J + v_{J,\underline{s}})\cap \mathcal{A}^{(0)} $ are depicted as red stars. Observe that, as stated in Proposition \ref{counting C_J}, the number of red stars contained in each $(C_J + v_{J,\underline{s}})\cap \mathcal{A}^{(0)}$ is the same for every fixed $J$. For example, if $J=[2]$, there are $\theta_{J}^{(0)}(3)\cdot g_J \cdot 3^{2-1} = 3$ red stars in each region. \end{Remark}
\subsection{F-signature function of cyclic quotient singularities} Let $S=\Bbbk\ps{x_1,\ldots,x_d}$, where $\Bbbk$ is an algebraically closed field of characteristic $p>0$. Let $G$ be a finite small cyclic group of order $n$, with $p \nmid n$, and $R = S^G$ be the ring of invariants under the action of $G$. Given that $\rank_R(M_\alpha) = 1$ for all $0 \leq \alpha \leq n-1$ by Remark \ref{Remark_irreps_cyclic}, Theorem \ref{theorem-Fsignaturefunctionquotient} allows us to write the multiplicity functions as follows: \[ \displaystyle \mult(M_\alpha,R^{1/p^e}) = \frac{p^{de}}{n} + \varphi_{d-2}^{(\alpha)} p^{(d-2)e} + \ldots + \varphi_1^{(\alpha)} p^e + \varphi_0^{(\alpha)}, \] where the functions $\varphi_c^{(\alpha)}$ are bounded and periodic. The main goal of this section is to give a more explicit description of the functions $\varphi_c^{(\alpha)}$ in the case where $G$ is cyclic. To achieve this goal, we combine the results we obtained in Section \ref{Section_quotient_sing} and Subsection \ref{Subsection_congruence}. \begin{Theorem}\label{theorem-Fsignaturecyclic} Let $S=\Bbbk\ps{x_1,\ldots,x_d}$, where $\Bbbk$ is an algebraically closed field of characteristic $p>0$. Let $G$ be a finite small cyclic group of order $n$, with $p \nmid n$, and $R = S^G$ be a $\frac{1}{n}(t_1,\ldots,t_d)$ cyclic quotient singularity. For all $e\in\mathbb{N}$, write $p^e=kn+r_e$, where $0 < r_e < n$. With the notation introduced in Notation \ref{notation_psi}, for $e \in \mathbb{N}$ we have \[ \displaystyle \varphi_c^{(\alpha)}(e) = \frac{1}{n} \left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i^{(\alpha)}r_e^{i-c}\right]. \] \end{Theorem} \begin{proof} Combining Proposition \ref{prop-Fsignatureofcyclic}, Lemma \ref{partition} and Proposition \ref{counting C_J} we see that \begin{align*}
\displaystyle \mult(M_\alpha,R^{1/p^e}) & = |[0,p^e-1]^d \cap \mathcal{A}^{(\alpha)}| \\ \\
& = \left| \bigsqcup_{i=0}^d \bigsqcup_{J \in \Gamma_i} \left(\bigsqcup_{\tiny{ \underline{s} \in ([0,k-1] \cap \mathbb{N})^i}} ((C_J + nv_{J,\underline{s}}) \cap \mathcal{A}^{(\alpha)}) \right)\right| \\ \\
& = \sum_{i=0}^d \sum_{J \in \Gamma_i} \left(\sum_{\tiny{ \underline{s} \in ([0,k-1] \cap \mathbb{N})^i}} |(C_J + nv_{J,\underline{s}}) \cap \mathcal{A}^{(\alpha)}| \right) \\
& = \sum_{i=0}^d \sum_{J \in \Gamma_i} k^i|C_J \cap \mathcal{A}^{(\alpha)}| & \mbox{ by Proposition \ref{counting C_J}} \\ \\ & = \sum_{i=0}^d \sum_{J \in \Gamma_i} k^i\theta_J^{(\alpha)}g_Jn^{i-1} & \mbox{ by Proposition \ref{counting C_J}} \\ \\ & = \sum_{i=0}^d k^in^{i-1} \psi_i^{(\alpha)}. \end{align*} Now recall that $k=\frac{p^e-r_e}{n}$, so that $k^in^i = (p^e-r_e)^i = \sum_{c=0}^i (-1)^{i-c}{i \choose c} p^{ce}r_e^{i-c}$. Substituting this into the formula gives \begin{align*} \displaystyle \mult(M_\alpha,R^{1/p^e}) & = \sum_{i=0}^d \left(\frac{p^e-r_e}{n}\right)^i n^{i-1} \psi_i^{(\alpha)}\\ \\ & = \frac{1}{n}\sum_{i=0}^d\sum_{c=0}^i(-1)^{i-c}{i \choose c}\psi_i^{(\alpha)}r_e^{i-c}p^{ce} \\ \\ & = \sum_{c=0}^d\frac{1}{n}\left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i^{(\alpha)}r_e^{i-c} \right]p^{ce} & \mbox{ changing the order of summation}. \end{align*} From this expression, it follows that $\varphi_c^{(\alpha)} = \frac{1}{n}\left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i^{(\alpha)}r_e^{i-c} \right]$, as desired. \end{proof}
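As a sanity check, the coefficient formula can be compared against the brute-force count of Proposition \ref{prop-Fsignatureofcyclic}. The sketch below (our code; it reuses the helpers \texttt{mult} and \texttt{theta} from the previous sketches) does so for the $\frac{1}{3}(1,2)$-singularity in characteristic $2$.

\begin{verbatim}
# Numerical check of the coefficient formula (informal sketch; reuses the
# helpers `mult` and `theta` defined in the earlier sketches).
from itertools import combinations
from functools import reduce
from math import comb, gcd

p, n, t, d = 2, 3, (1, 2), 2

def psi(i, e):   # psi_i(e) = sum over J in Gamma_i of g_J * theta_J(e), alpha = 0
    return sum(reduce(gcd, [t[j] for j in J] + [n]) * theta(J, 0, e, p, n, t)
               for J in combinations(range(d), i))

for e in range(1, 5):
    r = p**e % n                                           # r_e
    total = sum(sum((-1)**(i - c) * comb(i, c) * psi(i, e) * r**(i - c)
                    for i in range(c, d + 1)) * p**(c * e)
                for c in range(d + 1))
    assert total % n == 0 and total // n == mult(0, p, e, n, t)
\end{verbatim}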
More generally, we have seen in Section \ref{Section_quotient_sing} that the presence or absence of $c$-pseudoreflections inside $G$ determines the vanishing of the higher coefficients of $\mult(M_\alpha,R^{1/p^e})$. In the case of cyclic quotient singularities, we can relate this fact to the values $g_J$, for $J \in \Gamma_c$. We prove this fact in Theorem \ref{theorem-peggiodelcasomonomiale}. Before that, we need the following lemma, which follows from well-known identities between binomial coefficients.
\begin{Lemma} \label{lemma2} Given integers $0 \leq c \leq d-1$, we have \[ \displaystyle \sum_{i=c}^d (-1)^{i-c} {d \choose i}{i \choose c} = 0. \] \end{Lemma}
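For completeness, we recall the short verification: using the identity ${d \choose i}{i \choose c}={d \choose c}{d-c \choose i-c}$ and the binomial theorem, one finds \[ \displaystyle \sum_{i=c}^d (-1)^{i-c} {d \choose i}{i \choose c} = {d \choose c}\sum_{i=c}^{d} (-1)^{i-c}{d-c \choose i-c} = {d \choose c}\,(1-1)^{d-c} = 0, \] since $d-c \geq 1$.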
The following Theorem can be viewed as an improvement of Proposition \ref{prop-Fsignaturefuncionquotient}. Recall that $\theta_J$, $\varphi_i$ and $\psi_i$ denote the functions $\theta_J^{(0)}$, $\varphi_i^{(0)}$ and $\psi_i^{(0)}$, respectively.
\begin{Theorem}\label{theorem-peggiodelcasomonomiale} With the notation of Theorem \ref{theorem-Fsignaturecyclic}, consider an integer $1 \leq c \leq d-1$. Then the functions $\varphi_{d-1},\ldots,\varphi_{c}$ are identically zero if and only if $g_J = 1$ for all $J \in \Gamma_{c}$. Moreover, if $\varphi_\ell$ is the first non-vanishing coefficient with $0 \leq \ell < d$, and $p^e=kn+r_e$, then \[ \displaystyle \varphi_\ell = \frac{-{d \choose \ell}r_e^{d-\ell} + \psi_\ell}{n}. \] \end{Theorem}
\begin{proof} Assume that $g_J = 1$ for all $J \in \Gamma_{c}$. This implies that $g_J=1$ for all $J \in \Gamma_i$ and $c \leq i \leq d$, since $g_J$ divides $g_{J'}$ whenever $J' \subseteq J$. It follows that there are no $i$-pseudoreflections for all $c \leq i \leq d-1$: if $g^j$ fixed precisely the coordinates indexed by some $J \in \Gamma_i$, then $\frac{n}{\gcd(j,n)}$ would divide $t_\ell$ for every $\ell \in J$, and hence would divide $g_J=1$, forcing $n \mid j$, that is, $g^j=\mathrm{Id}_G$. By Theorem \ref{theorem-Fsignaturefunctionquotient} we conclude that $\varphi_i=0$ for all $c \leq i \leq d-1$.
We now prove the converse. Fix $e\in\mathbb{N}$ such that $r_e=1$. For such a value of $e$, by Theorem \ref{theorem-Fsignaturecyclic} we can express all the coefficients $\varphi_c$ as follows: \begin{align*} \displaystyle \varphi_{c}(e) & = \frac{1}{n} \left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i(e)\right]. \end{align*} In addition, again because $r_e=1$, Remark \ref{rem_psi} gives that $\theta_J(e) = 1$ for all $J \subseteq [d]$ and all such $e$. It follows that $\psi_i(e)= \sum_{J \in \Gamma_i} g_J$ for all $0 \leq i \leq d$.
Observe that, for every $i$, we have $|\Gamma_i| = {d \choose i}$, and $g_J \geq 1$ for all $J \in \Gamma_i$. Therefore, we always have an inequality $\psi_i \geq {d \choose i}$, with equality if and only if $g_J=1$ for all $J \in \Gamma_i$.
Note that $g_J=1$ for all $J \in \Gamma_{d-1}$, since $G$ is assumed to be small. This will be the base case of our induction. Now let $d-2 \geq c \geq 1$, and assume that $g_J = 1$ for all $J \in \Gamma_{c+1}$. Our previous observation implies that $\psi_i={d \choose i}$ for all $i \geq c+1$. The formula for $\varphi_c$ now gives \begin{align*} \displaystyle \varphi_{c}(e) & = \frac{1}{n}\left[\sum_{i=c}^d (-1)^{i-c}{i \choose c}\psi_i\right] \\ \\ & = \frac{1}{n}\left[\sum_{i=c+1}^d (-1)^{i-c}{i \choose c}{d \choose i} + \psi_{c} \right] \\ \\ & = \frac{1}{n} \left[- {d \choose c}+\psi_{c}\right] & \mbox{ by Lemma \ref{lemma2}.} \end{align*} Since $\varphi_{c}(e)=0$ by assumption, we conclude that $\psi_{c} = {d \choose c}$ and, using again the observation made above, we conclude that $g_J=1$ for all $J \in \Gamma_c$, as desired.
For the last part of the theorem, let $\varphi_\ell$ be the first non-zero coefficient, with $0 \leq \ell <d$. By what was shown above, we have that $g_J = 1$ for all $J \in \Gamma_{\ell+1}$, and then $\theta_J(e) = r_e^{d-i}$ for all $J \in \Gamma_i$ with $d \geq i \geq \ell+1$ and $e \in \mathbb{N}$, by Remark \ref{rem_psi}. It follows that $\psi_i = {d \choose i}r_e^{d-i}$, again for $d \geq i \geq \ell+1$ and $e \in \mathbb{N}$. By Theorem \ref{theorem-Fsignaturecyclic}, for all $e \in \mathbb{N}$ we finally have that \begin{align*} \displaystyle \varphi_{\ell}(e) & = \frac{1}{n}\left[\sum_{i=\ell}^d (-1)^{i-\ell}{i \choose \ell}\psi_i\right] \\ \\ & = \frac{1}{n}\left[\sum_{i=\ell+1}^d (-1)^{i-\ell}{i \choose \ell}{d \choose i}r_e^{d-i}r_e^{i-\ell} + \psi_{\ell} \right] \\ \\ & = \frac{1}{n}\left[r_e^{d-\ell}\sum_{i=\ell+1}^d (-1)^{i-\ell}{i \choose \ell}{d \choose i} + \psi_{\ell} \right] \\ \\ & = \frac{1}{n} \left[- {d \choose \ell}r_e^{d-\ell}+\psi_{\ell}\right] & \mbox{ by Lemma \ref{lemma2}.} \end{align*} \end{proof}
Proposition \ref{prop-Fsignaturefuncionquotient} shows that $\varphi_c=0$ for some $0 \leq c \leq d-2$ implies that $G$ contains no $c$-pseudoreflections. However, it is not true that if $\varphi_c=0$ for a single such $c$, then $g_J=1$ for all $J\in \Gamma_c$. Consider the following example.
\begin{Example} \label{ex-pseudoreflectionsVSgJ=1} We fix $n=st$, where $s,t>1$ are integers, and consider the $\frac{1}{n}(1,1,t,t)$-cyclic singularity $R$ over an algebraically closed field $\Bbbk$ of characteristic $p\nmid n$. Clearly, we have $g_{\{t\}}=t>1$. We show that the coefficient $\varphi_1$ of $p^e$ in the F-signature function of $R$ is $0$. This also follows from Theorem \ref{theorem-Fsignaturefunctionquotient}, since the cyclic group $G=\frac{1}{n}(1,1,t,t)$ does not contain $1$-pseudoreflections.
We show it using the formula
\[
\displaystyle \varphi_1 = \frac{1}{n} \left[\sum_{i=1}^d (-1)^{i-1}{i \choose 1}\psi_ir_e^{i-1}\right]
\]
given by Theorem \ref{theorem-Fsignaturecyclic}. Let $p^e=kn+r_e$, with $0< r_e<n$. Since $g_{J}=1$ for $J\in\Gamma_4$ and $J\in\Gamma_3$, we have $\psi_4=1$ and $\psi_3=4r_e$.
For $i=2$, we have $g_{\{1,1\}}=g_{\{1,t\}}=1$, and $\theta_{\{1,1\}}=\theta_{\{1,t\}}=r_e^2$, so $\psi_2=5r_e^2+\theta_{\{t,t\}}g_{\{t,t\}}$.
For $i=1$, we have $g_{\{1\}}=1$, and $\theta_{\{1\}}=r_e^3$, so $\psi_1=2r_e^3+2\theta_{\{t\}}g_{\{t\}}$.
Now, observe that $\theta_{\{t\}}$ counts the number of triples $a_1,a_2,a_3\in\{0,\dots,r_e-1\}$ such that $a_1+a_2+ta_3\equiv 0$ modulo $g_{\{t\}}=t$. This is $r_e$-times the number of couples $a_1,a_2\in\{0,\dots,r_e-1\}$ such that $a_1+a_2\equiv 0$ modulo $t=g_{\{t,t\}}$, that is, $\theta_{\{t,t\}}$. Thus, we obtain $\theta_{\{t\}}=r_e\theta_{\{t,t\}}$. Finally, the coefficient of $p^e$ in the F-signature function of $R$ is
\[\begin{split}
\varphi_1 &= \frac{1}{n} \left[\sum_{i=1}^d (-1)^{i-1}{i \choose 1}\psi_ir_e^{i-1}\right]=\frac{1}{n}\left[-4r_e^3+12r_e^3-10r_e^3-2r_e\theta_{\{t,t\}}t+2r_e^3+2\theta_{\{t\}}t\right]\\
&= \frac{1}{n}\left[-2r_e\theta_{\{t,t\}}t+2r_e\theta_{\{t,t\}}t\right]=0.
\end{split}
\]
However, notice that in this case $\varphi_2\neq0$, as $G$ contains a $2$-pseudoreflection.
For example, choose $n=6$, $t=3$, and $p\equiv 1$ modulo $6$, so that $r_e=1$ for all $e\in\mathbb{N}$. Then the F-signature function of the $\frac{1}{6}(1,1,3,3)$-singularity is the polynomial in $p^e$ given by $FS(e)=\frac{1}{6}p^{4e}+\frac{1}{3}p^{2e}+\frac{1}{2}$.
\end{Example}
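As a quick consistency check, formally substituting $p^e=1$ in the polynomial of Example \ref{ex-pseudoreflectionsVSgJ=1} gives $\frac{1}{6}+\frac{1}{3}+\frac{1}{2}=1$, consistent with $\mult(R,R)=1$.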
Theorem \ref{theorem-peggiodelcasomonomiale} relates the vanishing of the coefficients $\varphi_c=\varphi_c^{(0)}$ of $\mult(R,R^{1/p^e})$ to the invariants $g_J$ of the group $G$. Since Theorem \ref{theorem-Fsignaturecyclic} gives analogous formulas for $\mult(M_\alpha,R^{1/p^e})$ when $\alpha \ne 0$, one may expect that similar considerations about the vanishing coefficients $\varphi_c^{(\alpha)}$ may hold true. It turns out that the vanishing of a coefficient $\varphi_c^{(\alpha)}$ for $\alpha \ne 0$ is, in general, a weaker condition than the vanishing of $\varphi_c^{(0)}$. In fact, even the vanishing of all the coefficients $\varphi_i^{(\alpha)}$ for $c \leq i \leq d-1$ does not imply that $G$ has no $c$-pseudoreflections.
To better illustrate what can be said in this direction, consider the following conditions: \begin{enumerate} \item $g_J = 1$ for all $J \in \Gamma_c$. \item $G$ does not have any $i$-pseudoreflections (that is, $G_i=\emptyset$) for all $c \leq i \leq d-1$. \item The function $\varphi_i^{(0)}$ is identically zero for all $c \leq i \leq d-1$. \item The function $\varphi_i^{(\alpha)}$ is identically zero for all $c \leq i \leq d-1$ and all $0 \leq \alpha \leq n-1$. \item The function $\varphi_i^{(\alpha)}$ is identically zero for all $c \leq i \leq d-1$ and some $0 \leq \alpha \leq n-1$. \end{enumerate}
Our previous results show that the first four conditions are equivalent, and clearly (4) implies (5). However, (5) does not imply (1) -- (4), as the following example shows. \begin{Example} \label{ex-vanishingalpha} Let $R$ be the $\frac{1}{6}(1,2,3)$ cyclic quotient singularity over an algebraically closed field $\Bbbk$ of characteristic $p \equiv 1$ modulo $6$. Observe that $r_e=1$ for all $e\in\mathbb{N}$, hence the functions $e \mapsto \mult(M_\alpha,R^{1/p^e})$ will actually be polynomials in $p^e$. Using Theorem \ref{theorem-Fsignaturecyclic}, one can compute \[ \displaystyle \mult(M_\alpha,R^{1/p^e}) = \left\{\begin{array}{ll} \frac{p^{3e}}{6} + \frac{p^e}{2} + \frac{1}{3} & \mbox{ if } \alpha=0 \\ \\ \frac{p^{3e}}{6} - \frac{p^e}{3} + \frac{1}{6} & \mbox{ if } \alpha=1,5 \\ \\ \frac{p^{3e}}{6} - \frac{1}{6} & \mbox{ if } \alpha=2, 4 \\ \\ \frac{p^{3e}}{6} + \frac{p^e}{6} - \frac{1}{3} & \mbox{ if } \alpha=3 \end{array} \right. \] In particular, since $\varphi_1^{(2)} = \varphi_2^{(2)}=0$ but $\varphi_1^{(0)} \ne 0$, this shows that (5) does not imply (3). \end{Example}
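As a quick consistency check for Example \ref{ex-vanishingalpha}, recall that each $M_\alpha$ has rank one, so the six multiplicities above must add up to $\rank_R R^{1/p^e}=p^{3e}$: the coefficients of $p^{3e}$ sum to $6\cdot\frac{1}{6}=1$, the coefficients of $p^e$ sum to $\frac{1}{2}-2\cdot\frac{1}{3}+\frac{1}{6}=0$, and the constant terms sum to $\frac{1}{3}+2\cdot\frac{1}{6}-2\cdot\frac{1}{6}-\frac{1}{3}=0$.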
\begin{Corollary} \label{coroll_gcd1} Let $R$ be a $\frac{1}{n}(t_1,\ldots,t_d)$-cyclic singularity over an algebraically closed field of characteristic $p>0$. If $g_J = \gcd(t_\ell,n) = 1$ for all $1 \leq \ell \leq d$ then, with $p^e=kn+r_e$, the multiplicity functions $e \mapsto \mult(M_\alpha,R^{1/p^e})$ can be written in the form \[ \displaystyle \mult(M_\alpha, R^{1/p^e}) = \frac{p^{de}-r_e^d}{n} + \theta_{\emptyset}^{(\alpha)}(e), \]
where $\theta_{\emptyset}^{(\alpha)}(e)=\left|\left\{(a_1,\dots,a_d)\in([0,r_e-1] \cap \mathbb{N})^d: \ t_1a_1+\cdots+t_da_d \equiv \alpha \mod n \right\}\right|$. In particular, this applies to the case of Veronese rings, which correspond to the choice $t_\ell=1$ for all $\ell$. \end{Corollary} \begin{Remark} As already noted in Remark \ref{rem_graded} for the results of Section \ref{Section_quotient_sing}, analogous versions of Theorems \ref{theorem-Fsignaturecyclic} and \ref{theorem-peggiodelcasomonomiale}, as well as of Corollary \ref{coroll_gcd1}, hold in the graded setup. \end{Remark}
\section{Examples}\label{section:examples} In this section, we present several examples in order to show how our results can be used to compute the F-signature function of specific quotient singularities.
\begin{Example}[singularity $E_6$] \label{Ex_E6}
Let $\Bbbk$ be an algebraically closed field with $\chara\Bbbk=p\neq2,3$. The binary tetrahedral group $BT$ over $\Bbbk$ is the subgroup of $\mathrm{Sl}(2,\Bbbk)$ of order $24$ generated by the matrices
\begin{equation*}
A=\begin{pmatrix}
i_{\Bbbk} & 0 \\ 0 & i_{\Bbbk}^3
\end{pmatrix}, \
B=\begin{pmatrix}
0 & i_{\Bbbk} \\ i_{\Bbbk} & 0
\end{pmatrix}, \
C=\frac{1}{\sqrt 2}\begin{pmatrix}
\xi_{\Bbbk} & \xi_{\Bbbk}^3 \\ \xi_{\Bbbk} & \xi_{\Bbbk}^7
\end{pmatrix},
\end{equation*}
where $\sqrt 2$ denotes a square root of $2$ in $\Bbbk$, $i_{\Bbbk}$ is a primitive $4$-th root of $1$, and $\xi_{\Bbbk}$ is a primitive $8$-th root of $1$.
The quotient singularity $R=\Bbbk\llbracket u,v \rrbracket^{BT}$ is called the $E_6$ singularity, and is isomorphic to the hypersurface $\Bbbk\llbracket x,y,z \rrbracket/(x^2+y^3+z^4)$.
We compute its signature function using Theorem \ref{theorem-Fsignaturefunctionquotient}.
Fix $e\in\mathbb{N}$.
The group $BT$ consists of one $2$-pseudoreflection (the identity matrix $I$) and $23$ $0$-pseudoreflections.
Therefore, we only need to compute
\begin{equation}\label{eq-phi0singularityE6}
\varphi_0(e)=\frac{1}{24}\sum_{g\neq I}\sum_{0\leq a,b<p^e}(\xi_{g,e,1})^{a}(\xi_{g,e,2})^{b},
\end{equation}
where $\xi_{g,e,i}=\phi((\lambda_{g,i})^{1/p^e})\in\mathbb{C}$ and $\lambda_{g,1},\lambda_{g,2}\in\Bbbk$ are the eigenvalues of $g\in BT$.
Now, observe that
\begin{enumerate}
\item since $BT\subseteq\mathrm{Sl}(2,\Bbbk)$ we have $\lambda_{g,2}=\lambda_{g,1}^{-1}$ for all $g\in BT$;
\item two conjugate matrices have the same eigenvalues;
\item $(-)^{1/p^e}$ and $\phi$ are group homomorphisms, therefore $\xi_{g,e,i}$ is a root of unity in $\mathbb{C}$ of the same order as $\lambda_{g,i}$, which is also the order of $g$.
\end{enumerate}
So we can split the sum over the elements of the group in \eqref{eq-phi0singularityE6} by conjugacy classes.
In particular, $BT$ has one element conjugate to $-I$, $6$ elements conjugate to $B$, $4$ elements conjugate to $C$, $4$ to $C^2$, $4$ to $C^4$, and $4$ to $C^5$.
Thus, we can rewrite \eqref{eq-phi0singularityE6} as
\begin{equation*}
\varphi_0(e)=\frac{1}{24}\sum_{0\leq a,b<p^e}\left((-1)^a(-1)^b+6i^a(-i)^b+8\eta^a\eta^{-b}+8\eta^{2a}\eta^{-2b}\right),
\end{equation*}
where $i\in\mathbb{C}$ is the imaginary unit and $\eta\in\mathbb{C}$ is a primitive $6$-th root of $1$.
Notice that $\varphi_0(e)=\varphi_0(e_1)$ if $p^e\equiv p^{e_1}$ modulo $12$.
Since $\gcd(p,24)=1$, the only possible values of $p^e$ modulo $12$ are $1,5,7$ and $11$.
It is straightforward to check that for these values we always have $\varphi_0(e)=\frac{23}{24}$.
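For instance, if $p^e\equiv 1$ modulo $12$, then each of the geometric sums involved consists of a number of complete periods, which contribute $0$, followed by a single leftover term equal to $1$, so that
\begin{equation*}
\sum_{0\leq a,b<p^e}(-1)^{a+b}=\sum_{0\leq a,b<p^e}i^a(-i)^b=\sum_{0\leq a,b<p^e}\eta^{a-b}=\sum_{0\leq a,b<p^e}\eta^{2(a-b)}=1,
\end{equation*}
and hence $\varphi_0(e)=\frac{1}{24}\left(1+6+8+8\right)=\frac{23}{24}$. The remaining residues are checked in the same way.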
Therefore, the F-signature function of the $E_6$ singularity is
\begin{equation*}
FS(e)=\frac{1}{24}p^{2e}+\frac{23}{24},
\end{equation*}
in accordance with Brinkmann's result \cite{Brinkmann}.
In a similar way, one may compute the F-signature function of the quotient singularities $E_7$ and $E_8$. \end{Example}
\begin{Example}[$3$-rd Veronese subring of the singularity $D_4$] \label{Ex_3-VeroneseD6}
Let $\Bbbk$ be an algebraically closed field with $\chara\Bbbk=p\neq2,3$.
We consider the group $G$ obtained as an extension of the binary dihedral group $BD_2$, generated by the matrices $A$ and $B$ of Example \ref{Ex_E6}, by the cyclic group $C_3$ of order $3$ generated by the matrix $\mathrm{diag}(\omega_{\Bbbk},\omega_{\Bbbk})$, where $\omega_{\Bbbk}\in\Bbbk$ is a primitive $3$-rd root of unity.
In other words, we have a short exact sequence of finite groups
\begin{equation*}
1\rightarrow BD_2\rightarrow G\rightarrow C_3\rightarrow 1.
\end{equation*}
We can describe this group as $G=\{M\cdot N:\ M\in BD_2, \ N\in C_3\}$.
In particular, since $BD_2$ has order $8$ and $C_3$ has order $3$, it follows that $G$ has order $24$.
Notice, however, that $G$ is not isomorphic to the group $BT$ of Example \ref{Ex_E6}, since for example it contains an element of order $12$, while $BT$ does not.
The corresponding quotient singularity $R=\Bbbk\llbracket u,v \rrbracket^{G}\cong(\Bbbk\llbracket u,v\rrbracket^{BD_2})^{C_3}$ can be seen as a $3$-rd Veronese subring of the Kleinian singularity $D_4$.
More explicitly, we have $R=\Bbbk\llbracket u^{12} + v^{12}, u^6v^6, u^{15}v^3 - u^3v^{15}\rrbracket$.
We compute the F-signature function of $R$ using Theorem \ref{theorem-Fsignaturefunctionquotient}.
We proceed as in Example \ref{Ex_E6}, and we obtain that if $ p\equiv 1\mod 6$ then
\begin{equation*}
FS(e)=\frac{1}{24}p^{2e}+\frac{23}{24},
\end{equation*}
and if $ p\equiv 5\mod 6$ then
\begin{equation*}
FS(e)=\begin{cases}
\frac{1}{24}p^{2e}+\frac{23}{24} \ \text{ for } e \text{ even},\\ \\
\frac{1}{24}p^{2e}-\frac{1}{24} \ \text{ for } e \text{ odd}.
\end{cases}
\end{equation*} \end{Example}
The following three examples are explicit applications of Corollary \ref{coroll_gcd1}.
\begin{Example}[$2$-dimensional Veronese ring]\label{Ex_2Veronese} Let $R=\Bbbk\ps{ x^n,x^{n-1}y,\cdots,xy^{n-1},y^n}$ be the $2$-dimensional $n$-th Veronese ring, with $n\geq 2$. In the notation of Section \ref{section:cyclicquotient}, this corresponds to the $\frac{1}{n}(1,1)$ cyclic quotient singularity. By direct computation, one can see that \[
\theta_{\emptyset}^{(0)}(e)=\left|\left\{(a,b)\in([0,r_e-1] \cap \mathbb{N})^2: \ a+b \equiv 0 \mod n \right\}\right|=\max\{1,2r_e-n\}. \] By Corollary \ref{coroll_gcd1}, the F-signature function of $R$ is then \[ FS(e)=\frac{p^{2e}-r_e^2}{n}+\max\{1,2r_e-n\}. \] \end{Example}
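As a quick sanity check: for $n=3$ and $p^e=5$, so that $r_e=2$, a direct count gives $8$ pairs $(a,b)\in([0,4] \cap \mathbb{N})^2$ with $a+b\equiv 0 \mod 3$ (one with $a+b=0$, four with $a+b=3$ and three with $a+b=6$), in agreement with $FS(e)=\frac{5^2-2^2}{3}+\max\{1,1\}=8$.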
\begin{Example}[singularity $A_{n-1}$] \label{Ex_A_{n-1}} For $n \geq 2$, let $R=\Bbbk\ps{x^n,xy,y^n}$ be a $2$-dimensional $A_{n-1}$-type singularity. In our notation, this corresponds to the $\frac{1}{n}(1,n-1)$ cyclic quotient singularity. In this case, we have \[
\theta_{\emptyset}^{(0)}(e)=\left|\left\{(a,b)\in([0,r_e-1] \cap \mathbb{N})^2: \ a-b \equiv 0 \mod n \right\}\right|=r_e. \] It follows from Corollary \ref{coroll_gcd1} that the F-signature function of $R$ is \[ FS(e)=\frac{p^{2e}-r_e^2}{n}+r_e. \] This is in accordance with \cite{Brinkmann} (see also \cite[Example 4.3]{F-finExc}). \end{Example}
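Again as a sanity check: for $n=3$ and $p^e=5$, a direct count gives $9$ pairs $(a,b)\in([0,4] \cap \mathbb{N})^2$ with $a\equiv b \mod 3$, in agreement with $FS(e)=\frac{5^2-2^2}{3}+2=9$.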
\begin{Example}[$3$-dimensional Veronese ring]\label{Ex_3Veronese} Let $R$ be the $3$-dimensional $n$-th Veronese ring, with $n\geq 2$. This corresponds to the $\frac{1}{n}(1,1,1)$ cyclic quotient singularity. We have \[
\theta_{\emptyset}^{(0)}(e)=\left|\left\{(a,b,c)\in([0,r_e-1] \cap \mathbb{N})^3: \ a+b+c \equiv 0 \mod n \right\}\right|=\left|A_0\right|+\left|A_1\right|+\left|A_2\right|, \] where $A_i=\left\{(a,b,c)\in([0,r_e-1] \cap \mathbb{N})^3: \ a+b+c =i\cdot n \right\}$ for $i=0,1,2$. We have $A_0=\{(0,0,0)\}$, and for $i=1,2$, $A_i\neq\emptyset$ if and only if $3(r_e-1)\geq i\cdot n$.
\\ We assume $3(r_e-1)\geq n$, and we compute $|A_1|$. The number $|A_1|$ is equal to the number of ways we can place $3(r_e-1)-n$ objects in $3$ boxes, where each box can contain at most $r_e-1$ objects. If $0\leq 3(r_e-1)-n\leq r_e-1$, this number is $|A_1|={{3r_e-n-1}\choose{2}}$. If $r_e\leq3(r_e-1)-n\leq 2(r_e-1)$, it is $|A_1|={{3r_e-n-1}\choose{2}}-3{{2r_e-n-1}\choose{2}}$, where we have to subtract configurations where we put more than $r_e-1$ objects in one box. We cannot have configurations where two boxes contain more than $r_e-1$ objects, since this would imply $3(r_e-1)-n\geq 2(r_e-1)+1$, which is equivalent to $r_e-2\geq n$; a contradiction, since $r_e<n$. A similar reasoning yields $|A_2|={{3r_e-2n-1}\choose{2}}$ for $3(r_e-1)\geq 2n$. Note that, in this case, there are no configurations with more than $r_e-1$ objects in one box, since this would mean that $3(r_e-1)-2n\geq r_e$, which is equivalent to $2r_e-2n-3\geq0$, again contradicting that $r_e<n$. \\ Therefore, it follows from Corollary \ref{coroll_gcd1} that the F-signature function of $R$ is \[ FS(e)=\frac{p^{3e}-r_e^3}{n}+1+{{3r_e-n-1}\choose{2}}-3{{2r_e-n-1}\choose{2}}+{{3r_e-2n-1}\choose{2}}, \] with the convention that a binomial coefficient ${{u}\choose{2}}=0$ whenever $u<2$. \end{Example} Our results allow us to compute several examples of interest, such as the examples of Iyama and Yoshino from \cite{IyamaYoshino}.
\begin{Example}[Iyama-Yoshino's singularities] If $R$ is the $\frac{1}{3}(1,1,1)$ cyclic quotient singularity, then by Example \ref{Ex_3Veronese} the F-signature function of $R$ is \[ FS(e) =
\frac{p^{3e}-1}{3} + 1 \] if $p\equiv1\mod 3$, and \[ FS(e) = \left\{ \begin{array}{ll} \frac{p^{3e}-1}{3} + 1 & \mbox{ for } e \text{ even } \\ \\ \frac{p^{3e}-8}{3} + 2 & \mbox{ for } e \text{ odd} \end{array} \right. \] if $p\equiv2\mod 3$. On the other hand, for the cyclic quotient singularity $\frac{1}{2}(1,1,1,1)$, it follows from Corollary \ref{coroll_gcd1} and the fact that $r_e=1$ for all $e$ that the F-signature function is \[ \displaystyle FS(e) = \frac{p^{4e}-1}{2} + 1. \] \end{Example}
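As a quick check of the case $p\equiv2\mod 3$ and $e$ odd: for $p=2$ and $e=1$ the only triples $(a,b,c)\in([0,1] \cap \mathbb{N})^3$ with $a+b+c\equiv 0 \mod 3$ are $(0,0,0)$ and $(1,1,1)$, so that $FS(1)=2=\frac{2^3-8}{3}+2$, as predicted.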
\par We conclude the paper by providing one final example, the Klein four group embedded in ${\rm SL}(3,\Bbbk)$. The F-signature function of its ring of invariants turns out to be rather easy to compute with our techniques, and it gives an example where the last coefficient $\varphi_0$ is zero. \begin{Example}[Klein four group] Let $\Bbbk$ be a field of prime characteristic $p\geq 3$. The Klein four group $\mathbb{Z}/(2) \times \mathbb{Z}/(2)$ can be realized as a subgroup of ${\rm SL}(3,\Bbbk)$ with no $2$-pseudoreflections as follows: \[G= \displaystyle \left\{\left(\begin{matrix} 1 && \\ &1& \\ &&1 \end{matrix}\right), \left(\begin{matrix} 1 && \\ &-1& \\ &&-1 \end{matrix}\right),\left(\begin{matrix} -1 && \\ &1& \\ &&-1 \end{matrix}\right),\ \left(\begin{matrix} -1 && \\ &-1& \\ &&1 \end{matrix}\right)\right\}, \] where the entries which are not listed should be treated as zeros. It can be shown that the ring of invariants under this action of $G$ is isomorphic to $\Bbbk\ps{x^2,y^2,z^2,xyz}$. By Theorem~\ref{theorem-Fsignaturefunctionquotient}, because there are no $0$-pseudoreflections and no $2$-pseudoreflections, the only coefficient in the F-signature function that we have to determine is $\varphi_1$. A straightforward computation gives $\varphi_1(e) = \frac{3}{4}$ for all $e$, therefore \[ \displaystyle FS(e) = \frac{p^{3e}}{4} + \frac{3p^e}{4}. \] \end{Example} \section*{Acknowledgments} Part of this work was realized when the authors were guests at the Universit\'{e} de Neuch\^{a}tel, and at the Royal Institute of Technology (KTH), in Stockholm. The authors thank these institutions for their hospitality. The authors would like to thank Holger Brenner, Jack Jeffries, Yusuke Nakajima, Anurag Singh, Francesco Strazzanti, Peter Symonds, and Deniz Yesilyurt for many discussions and helpful comments. In particular, we thank them for suggesting several of the examples in Section \ref{section:examples}, and for helping with the related computations.
\end{document}
Medicinal attributes of major phenylpropanoids present in cinnamon
Uma Kant Sharma1,
Amit Kumar Sharma1 &
Abhay K. Pandey1
Excessive production of free radicals has been implicated in many diseases including cancer. They are highly reactive and bring about oxidation of biomolecules, i.e., proteins, lipids and nucleic acids, a process associated with many degenerative diseases. Natural products acting as antioxidants have the ability to neutralize free radicals and hence mitigate their harmful effects. The present study was designed to investigate the pharmacological properties, viz., antioxidant, antibacterial and antiproliferative activities, of cinnamaldehyde and eugenol, two naturally occurring phenylpropanoids present in Cinnamomum spp. and other plants.
The antioxidant potential of test compounds was evaluated by measuring DPPH free radical scavenging, reducing power and metal ion chelating activities. Protection against membrane damage was assayed by inhibition of lipid peroxidation in rat liver homogenate. Antibacterial activity was measured by Kirby-Bauer disc diffusion method while antiproliferative activity of test compounds was measured by sulforhodamine-B (SRB) assay.
Eugenol exhibited noticeable antioxidant potential in DPPH radical scavenging (81 %) and reducing power (1.12) assays at 1.0 μM/ml and 0.1 μM/ml concentrations, respectively. IC50 value of eugenol for radical scavenging activity was found to be 0.495 μM/ml. Cinnamaldehyde demonstrated considerable metal ion chelating ability (75 %) at 50 μM/ml and moderate lipo-protective activity in lipid peroxidation assay at 3 μM/ml. In addition cinnamaldehyde also showed appreciable antibacterial activity (zone of inhibition 32–42 mm) against Bacillus cereus (MTCC 6840), Streptococcus mutans (MTCC 497), Proteus vulgaris (MTCC 7299), Salmonella typhi (MTCC 3917) and Bordetella bronchiseptica (MTCC 6838) while eugenol produced moderate activity at 80 μM/disc. Cinnamaldehyde exhibited comparatively better antiproliferative potential against breast (T47D) and lung (NCI-H322) cancer cell lines than eugenol in SRB assay at 50 μM concentration.
Cinnamaldehyde possessed metal ion chelating, lipo-protective, antibacterial and antiproliferative activities, while eugenol showed potent H-atom donating potential, indicating radical quenching and reducing power abilities. The medicinal attributes shown by both compounds indicate their usefulness in the food and pharmaceutical sectors.
Background

Free radicals are associated with many degenerative diseases including cancer, cardio-vascular diseases, cataract, immune system decline and brain dysfunction [1]. Under normal metabolic conditions about 2–5 % of the O2 consumed by mitochondria is converted to ROS (reactive oxygen species) during metabolic processes within the body [2]. Their excessive production under abnormal conditions is regulated naturally by the antioxidant system [3]. Failure of antioxidant defenses results in a pathophysiological condition known as oxidative stress. Highly reactive oxygen and nitrogen species (ROS and RNS) are generally considered to be markers of oxidative stress. They permanently modify the genetic material, leading to numerous degenerative or chronic diseases [4]. Mis-repair of DNA damage results in mutations such as base substitutions and deletions which lead to carcinogenesis [5].
Antioxidants are a group of substances which are either produced in situ or supplied through food and supplements. They protect against free radical mediated membrane damage because of their scavenging and chelating properties [6, 7]. Butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) are commonly used as synthetic antioxidants, but their use is restricted by legislative rules because of doubts over their toxic and carcinogenic effects [8]. Antioxidants derived from plants are presumed to be safe since they are natural in origin and have the capability to counteract the damaging effects of ROS [9].
Many microorganisms cause food spoilage, which is one of the most important concerns of the food industry [10]. Initially, synthetic chemicals were used to prevent microbial contamination as well as oxidation of dietary components so that they remained in their natural form. Because of growing consumer concern about the side effects of synthetic compounds and the need for safer materials for preventing and controlling pathogenic microorganisms in food, natural products are currently being used to prevent microbial contamination [11]. Phytochemicals have been reported to modulate human metabolism in a manner beneficial for the prevention of infectious and degenerative diseases [12, 13].
Spices and aromatic vegetable materials of natural origin are used in food industries to enhance the flavor and fragrance qualities of food, and they are also used in traditional medicine [14]. Spices are good sources of natural antioxidant and antibacterial agents. Cinnamon and bay leaf are used as spices and are obtained from Cinnamomum spp. Principal components such as cinnamaldehyde, eugenol, cinnamic acid and cineol are responsible for the antioxidant activity of cinnamon [15]. A very low amount of eugenol is usually present in cinnamon bark, but it is the major component of cinnamon leaf essential oil. It is also abundantly present in Syzygium aromaticum (clove). Both cinnamaldehyde and eugenol belong to the phenylpropanoid class of phytochemicals. Cinnamaldehyde bears an aldehyde group attached to the benzene ring via a three-carbon chain, while eugenol has one hydroxy and one methoxy group directly attached to the ring (Fig. 1). Eugenol has a wide range of applications in perfumes, flavorings and essential oils, and in medicine as a local antiseptic and anesthetic [16]. The present study reports antioxidant, antibacterial and antiproliferative activities of cinnamaldehyde and eugenol, the two flavoring phenylpropanoid compounds.
Chemical structures of cinnamaldehyde (3-Phenyl-2-propenal) and eugenol (2-Methoxy-4-(2-propenyl) phenol)
Methods

Chemicals

Cinnamaldehyde (3-phenyl-2-propenal), eugenol (2-methoxy-4-(2-propenyl) phenol), DPPH (2,2-diphenyl-1-picryl hydrazyl), tert-butyl-4-hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) were obtained from Himedia Pvt. Ltd, Mumbai, India.
Assessment of antioxidant ability by in vitro assays
Free radical (DPPH) scavenging assay
The hydrogen-donating ability of cinnamaldehyde and eugenol was examined in the presence of DPPH, a stable radical, using the method of Singh et al. [17]. One ml of cinnamaldehyde (0.4–4.0 mM/ml) or eugenol (0.4–4.0 μM/ml) prepared in DMSO was taken in different test tubes and 3 ml of 0.1 mM DPPH solution prepared in methanol was added. The contents were mixed and allowed to stand at room temperature for 30 min in the dark. The final concentrations of cinnamaldehyde and eugenol in the reaction mixture were 0.1–1.0 mM/ml and 0.1–1.0 μM/ml, respectively. The reduction of the DPPH free radical was measured by recording the absorbance at 517 nm. BHA was used as standard for comparison. Control tubes contained 1 ml DMSO and 3 ml DPPH reagent in the reaction mixture. The radical scavenging activities (%) at different concentrations of the test samples were calculated using the following formula.
$$ \mathrm{Free}\ \mathrm{radical}\ \mathrm{scavenging}\ \mathrm{activity}\left(\%\right)=\left[\left({\mathrm{A}}_{\mathrm{C}}-{\mathrm{A}}_{\mathrm{S}}\right)/{\mathrm{A}}_{\mathrm{C}}\right]\times 100 $$
where AC and AS are the absorbance values of the control and the sample, respectively. IC50 value, the concentration of sample exhibiting 50 % free radical scavenging activity, was also determined by non-linear regression analysis using GraphPad Prism software.
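For readers who wish to reproduce this type of analysis without GraphPad Prism, a minimal Python sketch of the non-linear regression step is given below. It is purely illustrative: the concentration and scavenging values are placeholders rather than data from this study, and the two-parameter logistic model is one common choice among several.

import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, hill):
    # Two-parameter logistic model: % scavenging rises from 0 towards 100.
    return 100.0 * conc**hill / (ic50**hill + conc**hill)

conc = np.array([0.25, 0.50, 0.75, 1.00])  # concentrations (uM/ml), illustrative
scav = np.array([58.0, 65.0, 73.0, 81.0])  # % DPPH scavenging, illustrative

popt, _ = curve_fit(dose_response, conc, scav, p0=[0.5, 1.0])
print("Estimated IC50 = %.3f uM/ml" % popt[0])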
Reducing power assay
The reducing power of the samples was determined by the method of Oyaizu [18]. DMSO was used as solvent to make different concentrations of the samples. 1.0 ml of sample or standard was placed in different test tubes. To each test tube 2.5 ml of phosphate buffer (0.2 M, pH 6.6) and 2.5 ml of 1 % potassium hexacyanoferrate (K3Fe(CN)6) were added and the contents were vortexed. The final concentration of cinnamaldehyde, eugenol and ascorbic acid in the reaction mixture was 0.02–0.1 μM/ml. Tubes were then incubated at 50 °C in a water bath for 20 min. The reaction was stopped by adding 2.5 ml of 10 % TCA solution and then centrifuged at 4000 rpm for 10 min. After centrifugation 1.0 ml of the supernatant was mixed with 1 ml of distilled water and 0.5 ml of ferric chloride solution (0.1 %, w/v) and kept at room temperature for 2 min. The reaction led to the formation of a greenish blue colour. The absorbance was measured at 700 nm, and higher absorbance values denoted better reducing power of the test samples. Ascorbic acid was used as standard for comparison.
Lipid peroxidation inhibition activity (LPOI)
Lipo-protective efficacy of the samples was estimated by the method of Halliwell and Gutteridge [19]. The study was performed in accordance with the Guide for the Care and Use of Laboratory Animals, as promulgated by CPCSEA India and adopted by the Institutional Animal Ethics Committee, University of Allahabad, Allahabad. The liver tissue was isolated from normal albino Wistar rats and a 10 % (w/v) homogenate was prepared in phosphate buffer (0.1 M, pH 7.4, containing 0.15 M KCl) using a homogenizer. The homogenate was centrifuged at 800 g for 15 min and the clear cell free supernatant was used for the in vitro lipid peroxidation inhibition assay. 100 μl of different concentrations of the samples was taken in different tubes, followed by addition of 1.0 ml KCl (0.15 M), 0.3 ml phosphate buffer and 0.5 ml of tissue homogenate. Peroxidation was initiated by adding 100 μl FeCl3 (0.2 mM). The final concentration of cinnamaldehyde, eugenol and BHA (standard) in the reaction mixture was 1.0–3.0 μM/ml. After incubation at 37 °C for 30 min, lipid peroxidation was monitored by the formation of thiobarbituric acid reactive substances, which were estimated by adding 2 ml of ice-cold hydrochloric acid (0.25 N) containing 15 % TCA, 0.38 % TBA and 0.5 % BHT. The reaction mixture was incubated at 80 °C for 1 h followed by cooling and centrifugation. The absorbance of the pink supernatant was measured at 532 nm. All analyses were carried out in triplicate and results were expressed as mean ± SD. The protective effect of the samples against lipid peroxidation (% LPOI) was calculated by using the following formula.
$$ \mathrm{LPOI}\left(\%\right)=\left[\left(\mathrm{A}\mathrm{c}-\mathrm{A}\mathrm{s}\right)/\mathrm{A}\mathrm{c}\right]\times 100 $$
where Ac is absorbance of control and As is absorbance in the presence of the sample or standard compounds.
Metal ion chelating activity
The chelating activity of cinnamaldehyde and eugenol was measured by the method of Dinis et al. [20]. Samples (200 μl) at different concentrations were prepared in methanol, followed by addition of 50 μl of FeCl2 (2.0 mM). The reaction was initiated by addition of 200 μl of 50 mM ferrozine, and the reaction mixture was shaken vigorously and left standing at room temperature for 10 min. The concentration of cinnamaldehyde and eugenol in the final reaction mixture was 5–50 μM/ml. After the mixture had reached equilibrium, the absorbance of the pink-violet solution was measured spectrophotometrically at 562 nm using a UV-Visible spectrophotometer (Visiscan 067). BHA and EDTA were used for comparison. The control contained FeCl2 and ferrozine, without samples. The percentage inhibition of ferrozine-Fe2+ complex formation was taken as the metal chelating activity.
$$ \%\ \mathrm{Metal}\ \mathrm{ion}\ \mathrm{c}\mathrm{helating}\ \mathrm{activity}=\left[\left(\mathrm{A}\mathrm{c}-\mathrm{A}\mathrm{s}\right)/\mathrm{A}\mathrm{c}\right]\times 100 $$
Antibacterial activity assessment
Gram negative [Proteus vulgaris (MTCC 7299), Salmonella typhi (MTCC 3917) and Bordetella bronchiseptica (MTCC 6838)] and Gram positive [Bacillus cereus (MTCC 6840) and Streptococcus mutans (MTCC 497)] bacteria were obtained from IMTECH, Chandigarh, India.
Disc diffusion method for antimicrobial activity assay
Antimicrobial activity of each test compound was determined using the Kirby-Bauer disc diffusion method [21]. Briefly, 100 μl of the test bacteria was inoculated in 10 ml of fresh nutrient medium until they reached a count of approximately 10^8 cells/ml. From the log phase culture, 100 μl of the microbial suspension was spread onto Muller Hinton agar plates. Sterile discs (diameter 6 mm, Hi Media) were impregnated with 20 μl of the test sample and placed onto the inoculated plates, followed by incubation at 37 °C for 24 h. Standard antibiotic discs (meropenem 10 μg and vancomycin 30 μg) were used as controls. Diameters of the inhibition zones were measured in millimeters (mm) and results were reported as the average of three replicates.
Evaluation of antiproliferative activity by SRB assay
The in vitro antiproliferative activity of the test compounds was determined using the sulforhodamine-B dye (SRB) assay [22]. Cell suspension (100 μl, 1 × 10^5 to 2 × 10^5 cells per ml depending upon the mass doubling time of the cells) was grown in a 96-well tissue culture plate and incubated for 24 h. Stock solutions of the test compounds were prepared in DMSO and serially diluted with growth medium to obtain the desired concentrations. 100 μl samples (100 μM) were then added to the wells and cells were further incubated for another 48 h. The cell growth was arrested by layering 50 μl of 50 % TCA and incubation at 4 °C for an hour, followed by washing with distilled water and air-drying. SRB (100 μl, 0.4 % in 1 % acetic acid) was added to each well and the plates were incubated at room temperature for 30 min. The unbound SRB dye was washed out with 1 % acetic acid and the plates were then air dried. Tris–HCl buffer (100 μl, 0.01 M, pH 10.4) was added and the absorbance was recorded on an ELISA reader at 540 nm. Each test was done in triplicate. The values are reported as mean ± SD of three replicates.
Statistical analysis

All experiments were carried out in triplicate and data were represented as mean ± standard deviation (SD). Graphs were prepared using GraphPad Prism software. Data were analyzed using one-way ANOVA and values of P < 0.05 were considered statistically significant.
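The following minimal Python sketch illustrates this kind of one-way ANOVA using SciPy. It is illustrative only: the triplicate values below are placeholders, not measurements from this study, and the published analysis was carried out in GraphPad Prism.

from scipy.stats import f_oneway

group_a = [58.1, 57.6, 58.4]  # triplicate % activity, condition A (placeholder)
group_b = [65.2, 64.8, 65.5]  # condition B (placeholder)
group_c = [80.9, 81.3, 80.8]  # condition C (placeholder)

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print("F = %.2f, P = %.4f (significant if P < 0.05)" % (f_stat, p_value))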
Results

DPPH free radical scavenging assay
Radical scavenging potential of the compounds was determined by measuring the degree of discoloration of the DPPH solution. Eugenol exhibited strong antioxidant potential (58–81 %) at all test concentrations (0.25–1.0 μM/ml) while cinnamaldehyde showed low to moderate radical scavenging ability (23–57 %) (Fig. 2). The DPPH radical scavenging potential of eugenol was comparable to the activity shown by the standard antioxidant BHA (62–82 %). The IC50 values for eugenol and cinnamaldehyde were found to be 0.495 μM/ml and 0.842 mM/ml, respectively.
DPPH free radical scavenging ability of cinnamaldehyde (Cin) and eugenol (Eug). Results are shown as mean ± SD of three replicates (P <0.05). Concentrations are expressed as mM/ml (Cin) and μM/ml (Eug and BHA)
Reducing power ability of test compounds exhibited the similar pattern as observed in radical scavenging assay. Eugenol showed appreciable reducing ability (absorbance 0.33–1.12) as compared to cinnamaldehyde (0.20–0.62) in the concentration range 0.02–0.1 μM/ml (Fig. 3). However ascorbic acid accounted for slightly higher reducing power (0.36–1.58). Cinnamaldehyde and eugenol exhibited about 55 and 92 % reducing power of ascorbic acid, respectively, at lower concentration (0.02 μM/ml) while at higher concentration (0.1 μM/ml) they showed 40 and 71 % activity of ascorbic acid, respectively.
Reducing power ability of cinnamaldehyde and eugenol. Reducing power and concentration of compounds are expressed as absorbance and μM/ml, respectively. Ascorbic acid was used as standard reducing agent for comparison. Results are shown as mean ± SD (n = 3, P <0.05). Abbreviations: Cin- cinnamaldehyde, Eug-eugenol and Asc- ascorbic acid
Lipid peroxidation inhibition activity
Cinnamaldehyde and eugenol mediated protective effect on metal induced lipid peroxidation was measured in liver homogenate of albino Wistar rats in vitro and is represented as % LPOI (lipid peroxidation inhibition). Test compounds exhibited low to moderate activity. Cinnamaldehyde showed comparatively better protective action (LPOI 11–33 %) against peroxidative damage in the concentration range 1–3 μM/ml (Fig. 4). Under same test conditions eugenol accounted for lower activity (LPOI 5–15 %). BHA was used for comparison, which produced 18–56 % lipoprotective activity. Malondialdehyde produced by lipid peroxidation forms a pink chromogenic substance after reaction with thiobarbituric acid (TBA) which makes the basis for this measurement.
Lipoprotective efficacy of cinnamaldehyde and eugenol in rat liver homogenate. Lipid peroxidation inhibition (% LPOI) was determined at different concentrations (1.0, 2.0, 3.0 μM/ml) as described in methods. BHA was used as standard lipoprotective agent. The results are expressed as mean ± SD of three replicates (P <0.05). Abbreviations: Cin cinnamaldehyde, Eug eugenol, BHA butylated hydroxyanisole
Cinnamaldehyde showed better chelating ability as compared to eugenol at all test concentrations (Fig. 5). BHA and EDTA were used as standard chelating agents for comparison. Cinnamaldehyde exhibited about 45–75 % metal ion chelating ability in the concentration range 5–50 μM/ml which is equivalent to 59–78 % and 90–98 % of the activities demonstrated by EDTA and BHA, respectively (Fig. 5). Similarly eugenol produced 41–60 % activity (EDTA equivalent) and 68–73 % (BHA equivalent).
Metal ion chelating activity (%) of cinnamaldehyde and eugenol. BHA and EDTA were used for comparison. The activity of samples is represented as mean ± SD of three replicates (P < 0.05). Abbreviations: Cin cinnamaldehyde, Eug eugenol, BHA butylated hydroxyanisole and EDTA ethylenediaminetetraacetic acid
Antibacterial activity
Bacteria used in the study showed susceptibility to the test samples (Table 1). Cinnamaldehyde exhibited remarkable activity against both Gram positive [B. cereus (MTCC 6840), Streptococcus mutans (MTCC 497)] and Gram negative [P. vulgaris (MTCC 7299), S. typhi (MTCC 3917), B. bronchiseptica (MTCC 6838)] bacteria. A concentration dependent response was observed in the activity pattern. At 20 μM/disc the inhibition zones produced by cinnamaldehyde ranged between 22 and 30 mm, while at 80 μM/disc appreciable activity was observed (inhibition zones 32–40 mm). However, a saturation effect was observed at the higher concentration (120 μM/disc). Eugenol in general showed low to moderate response (inhibition zones 9–18 mm) at the test concentrations. Meropenem and vancomycin exhibited 18–27 mm zones of inhibition against the test bacteria. The results indicated that cinnamaldehyde has potent antibacterial activity.
Table 1 Antibacterial activity of cinnamaldehyde and eugenol
Evaluation of antiproliferative activity
Moderate antiproliferative activity (43–46 %) was observed with cinnamaldehyde against breast (T47D) and lung (NCI-H322) cancer cell lines at 50 μM concentration, while very low activity was observed against the prostate (PC-3) cancer cell line at the same concentration in the SRB assay (Fig. 6). In comparison to cinnamaldehyde, eugenol exhibited lower activity, viz., 39, 17 and 13 % against the T47D, NCI-H322 and PC-3 cell lines, respectively. The standard anticancer drugs mitomycin (1 μM), used against the breast and prostate cancer cell lines, and 5-fluorouracil (5 μM), used against the lung cancer cell line, showed only 50–60 % antiproliferative activity.
Antiproliferative activity of cinnamaldehyde and eugenol against cancer cell lines. Results are expressed as % inhibition of growth of cell lines (mean ± SD, n = 3). Abbreviations: T47D breast, NCI-H322 lung, PC-3 prostate cancer cell lines, Cin cinnamaldehyde, Eug eugenol, ACD anticancer drugs [mitomycin (1 μM) against breast and prostate cancer cell lines; 5-fluorouracil (5 μM) against lung cancer cell line]
Discussion

Plants are natural repositories of molecules with diverse structural and functional attributes. Many phytoconstituents exhibit nutritive and pharmacological activities [23–25]. They interact with different molecular and cellular targets including enzymes, hormones, trans-membrane transporters, and neurotransmitter receptors [26, 27]. A number of plant species and their metabolites have been identified and studied for their use in the pharmaceutical, medical, and agricultural industries [28]. The current work reports the biological activities of cinnamaldehyde and eugenol, two phenylpropanoids which are abundantly available in cinnamon and clove oils. Cinnamaldehyde occurs naturally in the bark of Cinnamomum zeylanicum, C. cassia and C. camphora. Essential oil of cinnamon bark contains about 80 % cinnamaldehyde and 10 % eugenol, while cinnamon leaf oil contains 5 % cinnamaldehyde and about 95 % eugenol [29]. Eugenol is also present in the essential oil fractions of S. aromaticum, Myristica fragrans and Ocimum basilicum. About 80–90 % eugenol is present in essential oils obtained from clove bud and leaf [30].
Free radicals play an important role in the development of many chronic diseases, including heart disease and cancer, and in the aging process [31]. To counteract their adverse effects, scavenging or reducing the formation of free radicals in the body becomes significant for health. In the present study cinnamaldehyde and eugenol showed the capability to scavenge free radicals in the DPPH radical scavenging assay (Fig. 2). DPPH is a stable nitrogen centered free radical and its color changes from violet to yellow upon uptake of hydrogen or electrons [32]. The effect of cinnamaldehyde and eugenol on DPPH is thought to be due to their hydrogen donating ability. Eugenol demonstrated greater radical scavenging activity as compared to cinnamaldehyde because it easily donates the hydrogen atom of the hydroxyl (OH) moiety directly linked to the benzene ring [33]. Radical scavenging activity of compounds depends on the number and position of hydroxyl groups on the aromatic ring of phenolic compounds, and therefore such compounds show the ability to reduce free radical levels [25, 34]. The radical scavenging action of eugenol is further supported by the work of Mathew and Emilia [35]. They reported that eugenol exhibited a faster reaction rate and stronger intensities of white-yellow spots on thin layer chromatography (TLC) plates as compared to cinnamaldehyde and cinnamon bark extract [35]. Our results have shown better radical scavenging activity of eugenol (IC50 value 0.495 μM/ml) as compared with the report of Tominaga et al. [36].
Reducing power of bioactive compounds is often used as an indicator of electron donating activity, which is an important mechanism of antioxidant action [37]. Antioxidants can be reductants which inactivate oxidants. The reducing ability is measured by the direct reduction of ferricyanide (Fe3+) to ferrocyanide (Fe2+) which makes a Perl's Prussian blue complex after addition of FeCl3 and absorbance is monitored at 700 nm. Increasing absorbance indicates an increase in reducing ability. The experimental data obtained in the current work indicated remarkable reducing potential in eugenol as compared to cinnamaldehyde (Fig. 3). Comparatively higher reducing power of eugenol might be due to the di and mono hydroxyl substitutions in the aromatic ring, which possess potent hydrogen or electron donating abilities [33, 38].
The process of lipid peroxidation, a free radical mediated chain reaction, is related to injury and inflammation and is often associated with oxidative damage of membrane lipids [39]. It is usually initiated by the hydroxyl radical, which is produced through the Fenton reaction in the presence of metal ions (Fe2+) [40]. Lipid peroxidation may be enzymatic, non-enzymatic, or both. The non-enzymatic reaction involves three phases, namely initiation, propagation and termination [41]. The lipid, lipoperoxyl, lipid hydroperoxide, peroxyl and alkoxyl radicals produced in the first two phases of lipid peroxidation are harmful to the body. Malondialdehyde (MDA) is an important product of lipid peroxidation which reacts with thiobarbituric acid (TBA) to form a TBA-MDA adduct with an absorption maximum at 532 nm [42]. In the study cinnamaldehyde accounted for a 33 % reduction in adduct formation at 3 μM/ml concentration, signifying its lipoprotective action, while lower activity was shown by eugenol (Fig. 4). The protective activity shown by the test compounds could be attributed to the chelation of the metal ion (Fe3+) which is responsible for generation of the hydroxyl radical [43]. The antioxidants are believed to intercept the free radical chain of oxidation and donate hydrogen from the phenolic hydroxyl groups, thereby forming a stable end product that does not initiate or propagate further oxidation of lipids.
Transition metals catalyze the formation of the free radicals which initiate and propagate the chain reaction in lipid peroxidation. Metal chelating capability indicates the efficiency of a compound to protect lipids against oxidative damage [31]. Metal ions are quantitatively measured by ferrozine, which makes a complex with Fe2+ and gives a pink colour. In the presence of chelating agents, the complex formation is disrupted, with the result that the pink-red color of the complex is decreased. It has been reported that chelating agents act as secondary antioxidants because they form bonds with the metal and reduce its redox potential, thereby stabilizing the oxidized form of the metal ion [44].
The present study reports that cinnamaldehyde has greater ion chelating ability as compared to eugenol (Fig. 5). This could be substantiated by the fact that aromatic aldehydes, especially those with an effective conjugation system, form stable Schiff bases, which are generally bi-, tri- or tetra-dentate chelate ligands and easily react with almost all transition metal ions to form very stable complexes with them [45]. Moreover, eugenol, with its meta-methoxy group and with the olefinic bond far away, is not well suited for chelation (Fig. 1). This is in contrast to cis-cinnamaldehyde, which forms chelates readily with low-valent transition metals via pi-bonding of its C = C and C = O bonds or via bonding of its C = C bond and the lone pair of C = O. Chelation reduces the polarity of the metal ion because of partial sharing of its positive charge with the donor group in the chelate ring system. The process of chelation thus increases the lipophilic nature of the central ion. This in turn favours its permeation through the lipid layer of the membrane, which reduces hydroxyl radical generation at the site and thereby prevents initiation of lipid peroxidation [43]. Positive correlations between metal ion chelating ability and lipoprotective activity have also been reported earlier [46].
Emergence of multiple drug resistance in human pathogenic organisms has given momentum to the search for new antimicrobial substances from alternative sources. Cinnamaldehyde exhibited considerable antibacterial activity against the test bacteria (Table 1). There are reports showing appreciable killing potential of cinnamaldehyde against other bacteria [47, 48]. Comparatively lower activity was observed with eugenol. Cinnamaldehyde and eugenol are major ingredients of essential oils obtained from various species of the genus Cinnamomum. A critical property of antibacterial components in essential oils is their hydrophobicity, which helps them to target the lipid-containing bacterial cell membrane and mitochondria [49]. In addition, these molecules can damage membrane proteins, deplete the proton motive force, cause leakage of cell contents and coagulate the cytoplasm [50, 51]. Although these mechanisms might act independently, some of them could be activated as a consequence of another, resulting in a multiplicity of mechanisms [49].
The study performed by Shang et al. on four cinnamaldehyde congeners having similar structures proved that a conjugated double bond and a long CH chain outside the ring are responsible for the better antibacterial activity of cinnamaldehyde. However, the presence of a hydroxyl group has also been shown to improve antibacterial activity [52]. Trans-cinnamaldehyde has been reported to possess antimicrobial activity toward a wide range of foodborne pathogens [53, 54]. Eugenol has been shown to produce an antibacterial effect against Salmonella typhi by damaging the cytoplasmic membrane, causing subsequent leakage of intracellular constituents [55].
A cancer chemotherapeutic agent can often provide prolongation of life, temporary relief from symptoms and occasionally complete remission. A successful anticancer drug should kill or incapacitate cancer cells by inducing apoptosis. Toxicity caused by chemopreventive agents imposes restrictions on their frequent usage, and patients seek alternative methods of treatment. Hence there is a need for developing new approaches and drugs from natural sources to treat cancer. Many important anticancer drugs, such as taxol and camptothecin, are derived from plant sources [38]. Free radicals, especially the hydroxyl radical, have the ability to add to double bonds of DNA bases. They abstract an H-atom from the methyl group of thymine and from each of the five carbon atoms of deoxyribose at very high rate constants, resulting in permanent modification of the genetic material and leading to malfunction of cellular processes. Thus free radicals represent the first step involved in carcinogenesis [56]. Hence the radical scavenging action shown by cinnamaldehyde and eugenol might play a role in inhibiting the initiation of carcinogenesis.
In the study, cinnamaldehyde and eugenol displayed moderate antiproliferative activity against breast (T47D) and lung (NCI-H322) cancer cell lines (Fig. 6), showing anticancer potential and signifying a possible role in inhibiting cancer progression. Drugs being used for cancer treatment follow different mechanisms of action. A number of herbal products have been reported to induce cell cycle arrest and thereby play an important role in cancer prevention and therapy [57]. Furthermore, cinnamaldehyde has also been shown to inhibit cyclin dependent kinases (CDKs), which are involved in cell cycle regulation [58]. Many workers have reported cytotoxic effects of eugenol against human osteoblast (U2OS), fibroblast (HFF) and hepatoma (HepG2) cell lines [59, 60]. The modulation of angiogenesis, DNA (synthesis, transcription and translation), enzyme activity and microtubule inhibition remains an important therapeutic strategy against numerous diseases, including cancer [61]. Hence further studies are required for understanding the mechanism of action of cinnamaldehyde and eugenol as pharmacological agents.
Conclusions

Cinnamaldehyde and eugenol exhibited considerable antioxidant and antimicrobial activities and moderate cytotoxic activity. Cinnamaldehyde showed lipo-protective and metal ion chelating abilities while eugenol accounted for radical scavenging and reducing activities. Cinnamaldehyde demonstrated appreciable antibacterial activity as compared to eugenol. The study will provide insight to researchers for the utilization of these compounds in food and medicinal applications.
Abbreviations

ACD, anticancer drugs; Asc, ascorbic acid; BHA, butylated hydroxyanisole; BHT, butylated hydroxytoluene; Cin, cinnamaldehyde; DPPH, 2,2-diphenyl-1-picryl hydrazyl; EDTA, ethylenediaminetetraacetic acid; Eug, eugenol; LPOI, lipid peroxidation inhibition; MTCC, microbial type culture collection; ROS, reactive oxygen species; SRB, sulforhodamine B dye
References

Maxwell SR. Prospects for the use of antioxidant therapies. Drugs. 1995;49:345–61.
Lopaczynski W, Zeisel SH. Antioxidants, programmed cell death, and cancer. Nutrition Res. 2001;21:295–307.
Halliwell B. Free radicals, antioxidants and human disease: curiosity, cause, or consequence. Lancet. 1994;344:721–4.
Ames BN, Shigenaga MK, Hagen TM. Oxidants, antioxidants and the degenerative diseases of aging. Proc Natl Acad Sci U S A. 1993;90:7915–22.
Barcellos-Hoff MH. Integrative radiation carcinogenesis interactions between cell and tissue responses to DNA damage. Semin Cancer Biol. 2005;15:138–48.
Moukette BM, Pieme CA, Njimou JR, Biapa CPN, Marco B, Ngogang JY. In vitro antioxidant properties, free radicals scavenging activities of extracts and polyphenol composition of a non-timber forest product used as spice: Monodora myristica. Biol Res. 2015;48:15.
Pandey AK, Mishra AK, Mishra A, Kumar S, Chandra A. Therapeutic potential of C. zeylanicum extracts: an antifungal and antioxidant perspective. Int J Biol Med Res. 2010;1:228–33.
Kahl R, Kappus H. Toxicity of synthetic antioxidants BHT and BHA in comparison with natural antioxidants vitamin E. Z Lebensm Unters Forsch. 1993;196:329–38.
Mishra A, Kumar S, Bhargava A, Sharma B, Pandey AK. Studies on in vitro antioxidant and antistaphylococcal activities of some important medicinal plants. Cell Mol Biol. 2011;57:16–25.
Walker SJ. Major spoilage microorganisms in milk and dairy products. J Soc Dairy Technol. 1988;41:91–2.
Brewer MS, Sprouls GK, Russon C. Consumer attitudes toward food safety issues. J Food Safety. 1994;14:63–76.
Tripoli E, Guardia ML, Giammanco S, Majo DD, Giammanco M. Citrus flavonoids: molecular structure, biological activity and nutritional properties: a review. Food Chem. 2007;104:466–79.
Mishra A, Kumar S, Pandey AK. Scientific validation of the medicinal efficacy of Tinospora cordifolia. Sci World J. 2013;2013:292934.
Mishra AK, Mishra A, Kehri HK, Sharma B, Pandey AK. Inhibitory activity of Indian spice plant Cinnamomum zeylanicum extracts against Alternaria solani and Curvularia lunata, the pathogenic dematiaceous moulds. Ann Clin Microbiol Antimicrob. 2009;8:9.
Downlink compressive channel estimation with support diagnosis in FDD massive MIMO
Wei Lu1,
Yongliang Wang1,
Qiqing Fang1 and
Shixin Peng2
Downlink channel state information (CSI) is critical in a frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) system. We exploit the reciprocity between the uplink and downlink channels in the angular domain and diagnose the supports of the downlink channel from the estimated uplink channel. Because basis mismatch damages the sparsity level and path angle deviations between the uplink and downlink transmission paths induce differences in the channel supports, a downlink support diagnosis algorithm based on DBSCAN (density-based spatial clustering of applications with noise), a method widely used in machine learning, is presented. With the diagnosed supports of the downlink channel in the angular domain, a weighted subspace pursuit (SP) channel estimation algorithm for FDD massive MIMO is proposed. A restricted isometry property (RIP)-based performance analysis of the weighted SP algorithm is given. Both the analysis and the simulation results show that the proposed downlink channel estimation with diagnosed supports is superior to the standard iteratively reweighted least squares (IRLS) and to SP without channel prior information or with the assumption of common supports for the uplink and downlink channels in the angular domain.
Support diagnosis
Compressive channel estimation
Weighted subspace pursuit
1 Introduction
Spectrum and radio resources in forthcoming communication systems are valuable for efficient transmission. Cognitive radio technologies are used for spectrum sensing to reduce the spectrum idle rate, while compressed sensing technologies are applied to improve the utilization efficiency of radio resources [1, 2]. Compressed sensing (CS) can be applied to a massive multiple-input multiple-output (MIMO) system for efficient transmission. It is crucial to have accurate channel state information (CSI) at the transmitter for downlink beamforming in massive MIMO. In a time division duplexing (TDD) massive MIMO system, downlink CSI can be obtained from the uplink channel by exploiting channel reciprocity. In a frequency division duplexing (FDD) massive MIMO system, downlink CSI feedback is challenging since the training and feedback overhead are proportional to the number of antennas at the base station (BS) if the feedback scheme of LTE is adopted [3]. Motivated by the CS framework and the sparsity of the massive MIMO channel in the angular domain, also known as the spatial domain, applications of CS to massive MIMO channel estimation and feedback have been studied intensively.
Compressive channel estimation benefits from CS technology, which reduces the training and feedback overhead, and profits from a priori knowledge of the channel support, which further improves the estimation performance. The users feed the compressed training measurements back to the BS to reduce feedback overhead, and orthogonal matching pursuit (OMP) is used for downlink CSI recovery in [4]. In [5], the modified basis pursuit (MBP) is proposed, which utilizes partial prior signal support information to improve the recovery performance. In [6], Bayesian estimation of the sparse massive MIMO channel is developed, in which neighboring antennas share their information about the channel support with each other. A weighted CS-based uplink channel estimation is considered for TDD massive MIMO in [7], where the previously estimated channel is used to generate weights for CS recovery. In [8], the previous channel support is used to initialize the estimated support for subspace pursuit. In [9], the authors consider the incorrect indices in the previous support set and exclude them adaptively. From these studies, it can be seen that prior support information can improve the recovery performance in massive MIMO, and compressive channel estimation in a TDD system can benefit from support priors obtained through channel reciprocity.
CS can also be used for channel estimation in FDD massive MIMO systems in both single-user (SU) and multiple-user (MU) scenarios. In the single-user scenario, [10] makes use of the previously estimated support information and the burst structured sparsity for massive MIMO channel estimation. In [11], the channel vector is separated into a dense vector and a sparse vector; the previous channel is used to predict the dense vector by least squares, and CS is applied to estimate the sparse vector. In [12], the impact of previous channel support information on the training overhead of FDD downlink channel estimation is examined within a weighted l1 minimization framework. In the multiuser scenario, [13] proposes a closed-loop pilot and CSIT feedback resource adaptation framework for MU massive MIMO, where the joint sparsity among users is used for compressive channel estimation. In [14], a two-stage weighted block l1 minimization algorithm is proposed for downlink CSI estimation in FDD massive MIMO systems, using the prior knowledge that MU channels share common supports. In [15], a variational Bayesian inference-based approach is taken for FDD channel estimation, capturing the partially joint sparsity shared by different users. From the studies above, it can be seen that FDD massive MIMO channel estimation benefits from the CS framework, and support information from the previously estimated channel or from the structured sparsity among multiple users can improve the estimation performance.
On the other hand, in an FDD system, the propagation environment is almost the same for uplink and downlink transmissions within a short interval. The number of significant multipaths, the path delays, and the path angles are almost the same for uplink and downlink [16]. There thus exists reciprocity between the massive MIMO uplink and downlink in the angular domain, which can also be used for efficient compressive channel estimation. In [17], the reciprocal channel characteristics of the dominant propagation path in FDD are explored for channel feedback; however, the CS framework is not adopted. In [18], joint overcomplete dictionary learning for uplink and downlink is proposed to obtain better channel estimation performance, where the dictionary learning is based on the assumption that the supports of the uplink and downlink are common. It can be seen that reciprocity in the angular domain exists and can also be exploited in compressive channel estimation.
From the studies above, it can be concluded that prior information, such as the channel supports shared among neighboring antennas, across adjacent time frames, or between uplink and downlink, can all improve the channel estimation performance. However, few of these works consider the basis mismatch effects, which deteriorate the sparsity. In [19], a sparsity-enhancing algorithm is discussed for compressive estimation of doubly selective multicarrier channels, where the leakage effect is discussed, but not for massive MIMO. Moreover, it is not practical to assume common supports for the uplink and downlink channels in an FDD system. The reciprocity of the massive MIMO uplink and downlink channels in the angular domain has also not been discussed adequately.
In this paper, we present an efficient channel estimation algorithm for FDD massive MIMO systems. Inspired by the prior knowledge of channel reciprocity in the angular domain between the uplink and downlink channels, we propose a weighted SP channel estimation with supports diagnosed from the uplink channel. Moreover, the impacts of basis mismatch and angle deviation on the sparsity in massive MIMO are discussed. According to [20], the radio paths arrive in clusters with angle spread. By utilizing this cluster property, we apply the DBSCAN algorithm (density-based spatial clustering of applications with noise) to extract the support information of the uplink channel; a DBSCAN-based support diagnosis algorithm is proposed to obtain the probable support of the downlink channel, and a weighted subspace pursuit channel estimation is then presented.
The contributions of the paper are listed as follows:
A channel support diagnosis algorithm based on DBSCAN is proposed, in which the reciprocity between the uplink and downlink channels in the angular domain is used.
A weighted SP algorithm is proposed for massive MIMO channel estimation, and an RIP-based performance analysis of the weighted SP is given, which shows the superiority of the weighted SP over the standard SP.
The rest of the paper is organized as follows. The system model is described in Section 2. The support analysis of massive MIMO channel in angular or spatial domain is given out in Section 3. In Section 4, the weighted SP for massive MIMO channel estimation based on the diagnosed support is presented. The RIP-based performance analysis and simulations are illustrated in Sections 5 and 6. Finally, the conclusions are drawn in Section 7.
Notation: A_T is the submatrix composed of the columns of the matrix A indexed by the set T. T^c denotes the complement of T. resid(·) is the residue vector after a projection operation; for example, resid(y, A) = y − AA†y, where A† is the pseudoinverse of the matrix A. δ_k is the restricted isometry constant of the k-RIP condition.
2 System model
We consider a single-user FDD massive MIMO scenario and assume that the BS is equipped with N antennas while the user terminal (UT) has a single antenna. For uplink channel estimation, the uplink training signal received by the BS can be written as
$$ {\boldsymbol{y}}^u=\sqrt{\rho^u}{\boldsymbol{h}}^u\boldsymbol{a}+{\boldsymbol{n}}^u $$
where h^u ∈ ℂ^{N×1} is the uplink channel, \( \boldsymbol{a}\in {\mathbb{C}}^{1\times {T}^u} \) is the uplink pilot sequence, T^u is the pilot length, ρ^u is the uplink received power, \( {\boldsymbol{n}}^u\in {\mathbb{C}}^{N\times {T}^u} \) is the received noise with i.i.d. Gaussian entries of mean 0 and variance σ², and \( {\boldsymbol{y}}^u\in {\mathbb{C}}^{N\times {T}^u} \) is the received signal at the BS.
For the downlink channel estimation in the FDD system, the BS transmits the pilots to UT. The UT receives the pilots and feeds the received signal back to the BS directly as in [21]. The received signal y d at the UT can be written as
$$ {\boldsymbol{y}}^d=\sqrt{\rho^d}{\boldsymbol{Ah}}^d+{\boldsymbol{n}}^d $$
where h^d ∈ ℂ^{N×1} is the downlink channel, \( \boldsymbol{A}\in {\mathbb{C}}^{T^d\times N} \) is the downlink pilot matrix, T^d is the pilot length, ρ^d is the downlink received power, \( {\boldsymbol{n}}^d\in {\mathbb{C}}^{T^d\times 1} \) is the received noise with i.i.d. Gaussian entries of mean 0 and variance σ², and \( {\boldsymbol{y}}^d\in {\mathbb{C}}^{T^d\times 1} \) is the received signal at the UT.
Since the massive MIMO channel exhibits sparsity in the angular/spatial domain, CS can be applied to compressive channel estimation with far fewer measurements, i.e., T^d < N. In compressive channel estimation, the uplink channel estimation in (1) can be represented as
$$ {\widehat{\boldsymbol{h}}}_a^u=\mathrm{argmin}{\left\Vert {\boldsymbol{h}}_a^u\right\Vert}_0,\mathrm{subject}\ \mathrm{to}\ {\left\Vert {\boldsymbol{y}}^u-\sqrt{\rho^u}{\boldsymbol{D}}^u{\boldsymbol{h}}_a^u\boldsymbol{a}\right\Vert}_2\le \varepsilon $$
where D^u ∈ ℂ^{N×N} is the channel basis matrix for the uplink channel, \( {\boldsymbol{h}}_a^u \) is the sparse spatial representation with \( {\boldsymbol{h}}^{\boldsymbol{u}}={\boldsymbol{D}}^u{\boldsymbol{h}}_a^u \), and \( {\widehat{\boldsymbol{h}}}_a^u \) is the estimated uplink channel in the spatial domain. Similarly, the downlink channel estimation in (2) can be represented as
$$ {\widehat{\boldsymbol{h}}}_a^d=\mathrm{argmin}{\left\Vert {\boldsymbol{h}}_a^d\right\Vert}_0,\mathrm{subject}\ \mathrm{to}\ {\left\Vert {\boldsymbol{y}}^d-\sqrt{\rho^d}{\boldsymbol{AD}}^d{\boldsymbol{h}}_a^d\right\Vert}_2\le \varepsilon $$
where D^d ∈ ℂ^{N×N} is the channel basis matrix for the downlink channel, \( {\boldsymbol{h}}_a^d \) is the sparse representation with \( {\boldsymbol{h}}^d={\boldsymbol{D}}^d{\boldsymbol{h}}_a^d \), and \( {\widehat{\boldsymbol{h}}}_a^d \) is the estimated downlink channel in the spatial domain.
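To make the measurement model concrete, the following is a minimal numpy sketch of the downlink model (2) together with the sparse angular representation used in (4). The unitary DFT basis, the sparsity level of six taps, the SNR, and all variable names are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

N, T_d = 100, 50           # BS antennas, downlink pilot length (T_d < N)
rng = np.random.default_rng(0)

# Angular basis D^d: a unitary DFT matrix, a common choice for a half-wavelength ULA
D = np.fft.fft(np.eye(N)) / np.sqrt(N)

# Sparse angular channel h_a^d with a few dominant taps
h_a = np.zeros(N, dtype=complex)
supp = rng.choice(N, size=6, replace=False)
h_a[supp] = (rng.standard_normal(6) + 1j * rng.standard_normal(6)) / np.sqrt(2)
h = D @ h_a                # spatial channel h^d = D^d h_a^d

# Compressed downlink pilots A (T_d x N) and noisy measurements y^d as in (2)
A = (rng.standard_normal((T_d, N)) + 1j * rng.standard_normal((T_d, N))) / np.sqrt(2 * T_d)
noise = rng.standard_normal(T_d) + 1j * rng.standard_normal(T_d)
noise *= np.linalg.norm(A @ h) / np.linalg.norm(noise) * 10 ** (-20 / 20)  # ~20 dB SNR
y = A @ h + noise          # the sparse recovery in (4) then operates on A @ D
```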
In most of the existing literature, problems (3) and (4) are solved separately. A more effective method is to solve (3) and (4) jointly. In this paper, channel support refers specifically to the set of locations of the nonzero elements of the channel representation in the angular/spatial domain. In [18], it is assumed that \( {\boldsymbol{h}}_a^u \) and \( {\boldsymbol{h}}_a^d \) have the same supports, and the sparse solutions \( {\widehat{\boldsymbol{h}}}_a^u \), \( {\widehat{\boldsymbol{h}}}_a^d \) and the dictionary matrices D^u, D^d are obtained iteratively. The uplink and downlink channel estimation benefits from the joint dictionary learning. However, the assumption of common supports of \( {\boldsymbol{h}}_a^u \) and \( {\boldsymbol{h}}_a^d \) is not practical because of basis mismatch and the angle deviation between uplink and downlink. In the following, we discuss the partially common supports of \( {\boldsymbol{h}}_a^u \) and \( {\boldsymbol{h}}_a^d \) based on the reciprocity of the massive MIMO channel in the angular domain and then propose a support diagnosis algorithm that provides prior support information for compressive channel estimation.
3 Channel supports in massive MIMO spatial channel
We first discuss the basis mismatch and leakage effects on the support of the massive MIMO spatial channel and then take the angle deviation between uplink and downlink into consideration for the support analysis.
3.1 Mismatch of MIMO channel basis
In massive MIMO, the number of dominant physical paths between the transmitter and the receiver is relatively small compared to the number of antennas. For example, in the channel model proposed in the 3GPP TS36.900 report [12], the number of dominant physical paths is six. There is therefore channel sparsity in the angular domain. Similarly to what we did in [22], the massive MIMO channel can be given by
$$ \boldsymbol{H}={\sum \limits}_{i=1}^p{a}_i{\boldsymbol{e}}_r\left({\Omega}_{ri}\right){\boldsymbol{e}}_t{\left({\Omega}_{ti}\right)}^H $$
where H is the physical MIMO channel, a_i is the attenuation of the ith path, and θ_ti and θ_ri are the angle of departure and the angle of arrival (Ω_ti := cos θ_ti, Ω_ri := cos θ_ri), respectively; e_r and e_t are the steering vectors of the receiving and transmitting antenna arrays, respectively, and are given by
$$ {a}_i:= \kern0.5em {\beta}_{\mathrm{i}}\sqrt{N_t{N}_r}\exp \left(-\frac{j2\pi {d}_i}{\lambda_c}\right) $$
$$ {\mathbf{e}}_r\left(\Omega \right):= \kern0.5em \frac{1}{\sqrt{N_r}}{\left[1\kern0.5em \exp \left(-j2\pi {\Delta}_r\Omega \right)\kern0.5em \cdots \kern0.5em \exp \left(-j2\pi \left({N}_r-1\right){\Delta}_r\Omega \right)\right]}^T $$
$$ {\mathbf{e}}_t\left(\Omega \right):= \kern0.5em \frac{1}{\sqrt{N_t}}{\left[1\kern0.5em \exp \left(-j2\pi {\Delta}_t\Omega \right)\kern0.5em \cdots \kern0.5em \exp \left(-j2\pi \left({N}_t-1\right){\Delta}_t\Omega \right)\right]}^T $$
where d_i and β_i are the distance and the large-scale fading of the ith path, respectively, and Δ_r and Δ_t are the antenna separations at the receiving and transmitting arrays normalized by the wavelength λ_c of the transmitted signal, respectively.
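For illustration, a minimal Python sketch of the steering vectors (7)-(8) and the multipath channel (5) follows; the helper names and the default half-wavelength spacing are assumptions made here and reused in the later examples.

```python
import numpy as np

def steer(omega, n_ant, delta=0.5):
    """Steering vector e(Omega) of an n_ant-element ULA as in (7)/(8);
    delta is the antenna separation normalized by the wavelength."""
    k = np.arange(n_ant)
    return np.exp(-2j * np.pi * delta * k * omega) / np.sqrt(n_ant)

def physical_channel(omegas_r, omegas_t, gains, n_r, n_t, delta=0.5):
    """Multipath MIMO channel H of (5): a sum of rank-one path contributions."""
    H = np.zeros((n_r, n_t), dtype=complex)
    for o_r, o_t, a in zip(omegas_r, omegas_t, gains):
        H += a * np.outer(steer(o_r, n_r, delta), steer(o_t, n_t, delta).conj())
    return H
```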
We formulate the uplink and downlink channel basis matrices as in [22]:
$$ {\boldsymbol{U}}_t=\left[{\boldsymbol{e}}_t(0),{\boldsymbol{e}}_t\left(\frac{1}{L_t}\right),\cdots, {\boldsymbol{e}}_t\left(\frac{N_t-1}{L_t}\right)\right] $$
$$ {\boldsymbol{U}}_r=\left[{\boldsymbol{e}}_r(0),{\boldsymbol{e}}_r\left(\frac{1}{L_r}\right),\cdots, {\boldsymbol{e}}_r\left(\frac{N_r-1}{L_r}\right)\right] $$
where L_t = N_t Δ_t and L_r = N_r Δ_r. Then, the massive MIMO channel in the spatial domain can be represented as
$$ {\boldsymbol{H}}^{\prime }={\boldsymbol{U}}_{\boldsymbol{r}}^{\boldsymbol{H}}\boldsymbol{H}{\boldsymbol{U}}_{\boldsymbol{t}} $$
where H′ is the MIMO channel in the angular/spatial domain. The (k, l)th entry of the channel matrix H′ is then
$$ {\displaystyle \begin{array}{c}{h}_{kl}^{\prime }={\mathbf{e}}_{\mathbf{r}}^H\left(\frac{k}{L_r}\right)\left(\sum \limits_{i=1}^P{a}_i{\mathbf{e}}_{\mathbf{r}}\left({\Omega}_{ri}\right){\mathbf{e}}_{\mathbf{t}}{\left({\Omega}_{ti}\right)}^H\right){\mathbf{e}}_{\mathbf{t}}\left(\frac{l}{L_t}\right)\\ {}=\sum \limits_{i=1}^P{a}_i{\mathbf{e}}_{\mathbf{r}}^H\left(\frac{k}{L_r}\right){\mathbf{e}}_{\mathbf{r}}\left({\Omega}_{ri}\right){\mathbf{e}}_{\mathbf{t}}{\left({\Omega}_{ti}\right)}^H{\mathbf{e}}_{\mathbf{t}}\left(\frac{l}{L_t}\right)\\ {}=\sum \limits_{i=1}^P{a}_i{f}_r\left(\frac{k}{L_r}-{\Omega}_{ri}\right)\cdot {f}_t\left({\Omega}_{ti}-\frac{l}{L_t}\right)\end{array}} $$
where f_r(·) and f_t(·) take the forms
$$ {f}_r\left(\Omega -{\Omega}^{\prime}\right)={\boldsymbol{e}}_r^H\left(\Omega \right){\boldsymbol{e}}_r\left({\Omega}^{\prime}\right) $$
$$ {f}_t\left(\Omega -{\Omega}^{\prime}\right)={\boldsymbol{e}}_t^H\left(\Omega \right){\boldsymbol{e}}_t\left({\Omega}^{\prime}\right) $$
If the jth path is aligned with the grid, i.e., Ω_rj = m/L_r and Ω_tj = n/L_t in (12), while the angles of the other paths are not integer multiples of 1/L_r and 1/L_t, leakage from those paths still appears on the entries of H′, and we have
$$ {h}_{mn}^{\prime }={a}_j+\sum \limits_{i=1,i\ne j}^P{a}_i{f}_r\left(\frac{k}{L_r}-{\Omega}_{ri}\right)\cdot {f}_t\left({\Omega}_{ti}-\frac{l}{L_t}\right) $$
where the second part of (15) is the leakage effects of other paths. If the angle of departure (AOD) and angle of arrival (AOA) of the paths are integer multiples of 1/L r and 1/L t , most of the entries of channel matrix in angular/spatial domain are 0. Otherwise, some entries of channel matrix in angular/spatial domain are not 0 because of the leakage effect even though there are no paths in the corresponding directions of the entries in channel matrix. It should be noticed that in the paper, the UT is equipped with single antenna; however, the analysis is also valid.
3.2 Support difference between uplink and downlink massive MIMO channel
In [23], it is assumed that the propagation environment is static between the uplink and downlink transmissions in FDD; hence, the same multipaths, UE positions, and DOAs valid for the uplink are also valid for the downlink. However, this assumption does not always hold in practical scenarios. As shown in Fig. 1, the DOA and AOA of the uplink and downlink channels are not exactly the same but deviate slightly. Based on the analysis of channel basis mismatch, the supports of the uplink and downlink spatial channels are not exactly the same either, as shown in Fig. 1. However, the probable support of the downlink channel can be inferred from the uplink channel, which we call support diagnosis in this paper.
Angle deviation and sparsity in spatial channel
In practical scenarios, the angle of a transmission path changes slightly between uplink and downlink because of UE movement and environment changes. Moreover, the spatial channel model of the 3GPP TS36.900 report assumes six paths and reports a mean angle spread of 5° in the suburban macrocell scenario. Hence, it is practical to assume that the channel supports of the uplink and downlink are only partially common because of the angle deviation between uplink and downlink. Next, we discuss the effects of this angle deviation on the channel supports.
Property 1: If the angle deviation for one path in uplink and downlink is Δ at BS as shown in Fig. 1, the deviation of the support center is bounded by \( {L}_r\sqrt{2-2\mathit{\cos}\Delta} \).
Proof: Consider the angle deviation of one transmission path. The AOA of the uplink at the BS is Ω_ti := cos θ_ti, and the DOA of the downlink at the BS is Ω_ri := cos θ_ri. In a practical scenario, θ_ti and θ_ri are not equal and differ by Δ. Then, we have
$$ \cos {\uptheta}_{ri}-\cos {\uptheta}_{ti}=\cos \left({\uptheta}_{ti}+\Delta \right)-\cos {\uptheta}_{ti}=\sqrt{2-2\mathit{\cos}\Delta}\mathit{\sin}\left(\varphi -{\uptheta}_{ti}\right) $$
where tanφ = (cosΔ − 1)/ sin Δ. Then, if the angle difference is Δ, we have
$$ \left|\cos {\uptheta}_{ri}-\cos {\uptheta}_{ti}\right|\le \sqrt{2-2\mathit{\cos}\Delta} $$
If the channel basis matrices are given as (9) and (10), the sampling interval of the channel basis is 1/L_r; the position change of the support center is ⌈|cos θ_ri − cos θ_ti|/(1/L_r)⌉ and can be bounded as
$$ \left|\cos {\uptheta}_{ri}-\cos {\uptheta}_{ti}\right|/\left(1/{L}_r\right)\le {L}_r\sqrt{2-2\mathit{\cos}\Delta} $$
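A quick numeric check of this bound (an illustrative sketch, not part of the original proof) for the values used later in the paper, L_r = 50 and Δ = 5°:

```python
import numpy as np

L_r, delta_deg = 50, 5.0
d = np.deg2rad(delta_deg)
bound = L_r * np.sqrt(2 - 2 * np.cos(d))   # Property 1 bound on the support shift
thetas = np.linspace(0.01, np.pi - 0.1, 1000)
shift = L_r * np.abs(np.cos(thetas + d) - np.cos(thetas))
print(f"max observed shift {shift.max():.2f} <= bound {bound:.2f}")  # about 4.36 taps
```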
From Property 1, it can be seen that the support position deviation is related to the antenna array size L_r. The deviation of the support center is illustrated in Fig. 2b. In the following, we discuss the sparsity deterioration caused by basis mismatch.
a Sparsity in spatial channel and b uplink and downlink supports with angle deviation
Property 2: Let the angle of a path arriving at the BS be θ, and Ω = cos θ. The channel basis matrices are defined as (9) and (10). If the path angle satisfies Ω = i/L_r, i ∈ [1, ⋯, N], then only the (i + 1)th element of the spatial channel is nonzero and there is no energy leakage; otherwise, Ω = k/L_r + Ω′, k ∈ [1, ⋯, N] with |Ω′| ≤ 1/(2L_r), energy leaks to the neighboring elements of the spatial channel, and at most a fraction 1/(π(Δk − 1)) of the energy lies outside the region [k − Δk, ⋯, k, ⋯, k + Δk] of the spatial channel.
Proof: For Ω = i/L_r, there is no energy leakage, as discussed in the previous section. Now, we consider Ω = k/L_r + Ω′ and analyze the energy leakage to the (k′ + 1)th atom of the channel basis. We define ΔΩ = Ω − k′/L_r = Ω′ + k/L_r − k′/L_r = Ω′ + Δk/L_r, Δk = k − k′, and we have
$$ {\displaystyle \begin{array}{c}{f}_r\left({\Delta}_{\Omega}\right)={f}_r\left(\Omega -{k}^{\prime }/{L}_r\right)={\boldsymbol{e}}_{\boldsymbol{r}}{\left(\Omega \right)}^H{\boldsymbol{e}}_{\boldsymbol{r}}\left({k}^{\prime }/{L}_r\right)\\ {}=\frac{1}{N_r}\exp \left( j\pi {\Delta}_r{\Delta}_{\Omega}\left({N}_r-1\right)\right)\frac{\sin \left({N}_r\pi {\Delta}_r{\Delta}_{\Omega}\right)}{\sin \left(\pi {\Delta}_r{\Delta}_{\Omega}\right)}\end{array}} $$
The absolute value of f_r(Δ_Ω) is given by
$$ \left|{f}_r\left({\Delta }_{\Omega}\right)\right|=\left|{f}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)\right|=\frac{1}{N_r}\left|\frac{\sin \left(\pi {L}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)\right)}{\sin \left(\pi {L}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)/{N}_r\right)}\right| $$
Letting \( {\Gamma}_{k^{\prime }+1}^2={\left|{f}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)\right|}^2 \), we have
$$ {\Gamma}_{k^{\prime }+1}^2=\frac{1}{N_r^2}\frac{\sin^2\left(\pi {L}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)\right)}{\sin^2\left(\pi {L}_r\left({\Omega}^{\prime }+\Delta {k}^{\prime }/{L}_r\right)/{N}_r\right)} $$
By utilizing the cyclic symmetry of \( {\Gamma}_{k^{\prime }+1}^2 \), the total leakage energy outside the region [k − Δk, ⋯, k, ⋯, k + Δk] of the spatial channel is bounded as
$$ {\displaystyle \begin{array}{c}{\Gamma}_{\Delta k}^2+\dots +{\Gamma}_{N_r-\Delta k}^2=\frac{2}{N_r^2}\sum \limits_{i=\Delta k}^{N_r/2}\frac{\sin^2\left(\pi {L}_r\left({\Omega}^{\prime }+i/{L}_r\right)\right)}{\sin^2\left(\frac{\pi {L}_r\left({\Omega}^{\prime }+i/{L}_r\right)}{N_r}\right)}\\ {}\le \frac{2}{N_r^2}\sum \limits_{i=\Delta k}^{N_r/2}\frac{1}{\sin^2\left(\frac{\pi {L}_r\left({\Omega}^{\prime }+i/{L}_r\right)}{N_r}\right)}\\ {}\le \frac{2}{N_r^2}{\int}_{\Delta k-1}^{N_r/2}\frac{di}{\sin^2\left(\frac{\pi {L}_ri}{N_r}\right)}\\ {}=\frac{2}{\pi {N}_r}\left(\cot \left(\pi \left(\Delta k-1\right)/{N}_r\right)\right)\\ {}\le \frac{1}{\pi \left(\Delta k-1\right)}\end{array}} $$
Since \( \sum \limits_{i=1}^N{\Gamma}_i^2=1 \) by Parseval's theorem, Property 2 is proved. ■
From Property 2, it can be seen that although energy leakage exists because of basis mismatch, most of the energy is allocated around the support corresponding to the angle direction, and the leakage energy decays at the rate \( \frac{1}{\pi \left(\Delta k-1\right)} \).
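The following sketch numerically checks this leakage bound, reusing the steer helper from Section 3.1; the grid index k = 30 and the off-grid offset 0.4/L_r are illustrative assumptions.

```python
import numpy as np

N_r, delta = 100, 0.5
L_r = N_r * delta
k, off = 30, 0.4 / L_r                    # off-grid path with |Omega'| <= 1/(2 L_r)
e = steer(k / L_r + off, N_r, delta)
U_r = np.stack([steer(i / L_r, N_r, delta) for i in range(N_r)], axis=1)
gamma2 = np.abs(U_r.conj().T @ e) ** 2    # leakage profile; sums to 1 (Parseval)

for dk in (2, 4, 8):
    inside = np.arange(k - dk, k + dk + 1) % N_r   # circular window around tap k
    leaked = 1 - gamma2[inside].sum()
    print(f"dk={dk}: leaked energy {leaked:.3f} <= bound {1 / (np.pi * (dk - 1)):.3f}")
```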
To illustrate these properties, we assume that the number of antennas is 100 and that there are only three dominant paths. In Fig. 2, it can be seen that the channel basis indices, also called channel taps, in the spatial domain are located in clusters. If the channel basis matches the channel angle perfectly, the sparsity is relatively good, as for path 1 in Fig. 2a; otherwise, the sparsity deteriorates, as for paths 2 and 3. Meanwhile, if the angles of two paths are close, the amplitudes of their channel taps in the spatial domain superpose because of the leakage effects. Under the ideal assumption, the AOA and DOA of the uplink and downlink channels are common, but in practical scenarios they are not. We assume that the angle deviation of a path between uplink and downlink is within 5° randomly, as in [20, 24]; then, as shown in Fig. 2b, the channel taps (channel supports) of the uplink and downlink in the spatial domain change slightly and are partially common, consistent with Property 1. In the following, we utilize these properties for downlink spatial channel support diagnosis.
4 Proposed methods
In this section, the support diagnosis algorithm is first proposed, based on the reciprocity of the uplink and downlink channels in the angular domain; then, the downlink massive MIMO channel estimation algorithm based on the diagnosed supports is presented.
4.1 Channel support diagnosis algorithm
In this subsection, we propose the channel support diagnosis algorithm based on the analysis of basis mismatch and of the support difference between the downlink and uplink channels. Although the AOA and DOA of the uplink and downlink are not exactly the same, the number of multipaths is common to both. Inspired by the clustering property of the channel support in the spatial domain (for example, there are three clusters in Fig. 2b), we apply the DBSCAN (density-based spatial clustering of applications with noise) algorithm to extract the multipath information in the spatial domain. We then infer the probable channel supports of the downlink channel. The details of the support diagnosis algorithm are presented in Algorithm 1.
In step S1, the estimated uplink spatial channel is used for path number estimation by the DBSCAN algorithm. DBSCAN is a data clustering algorithm proposed by Martin Ester that is widely used in machine learning [25]. It is a density-based clustering algorithm that groups together points that are closely packed and marks as outliers those points that lie in low-density regions. In the DBSCAN algorithm, points are classified as core points, density-reachable points, and outliers. A point p is a core point if at least minPts points are within distance ε of it, where ε is the maximum radius of the neighborhood of p. A point q is reachable from p if there is a path p_1, ..., p_n with p_1 = p and p_n = q, where each p_{i+1} is directly reachable from p_i. If p is a core point, it forms a cluster together with all points that are reachable from it. For example, in Fig. 3, p_3 is reachable from p_1, and p_1, p_2, p_3, and p_4 form cluster 2. The detailed algorithm can be found in [26].
Through step S1, DBSCAN can capture the multipaths under most conditions, except when the channel basis matches the path angle well or when the leaked channel taps are not selected, i.e., when only a single channel tap is selected for a path, as for path 1 in Fig. 2a. In that case, DBSCAN omits this lone channel tap because it lies in a low-density region, like point p_n in Fig. 3, which is an inherent characteristic of DBSCAN. To avoid this situation, we add such channel supports manually in step S2. In this way, we can capture most of the channel supports and obtain the multipath information.
Illustration of DBSCAN algorithm
In the spatial channel, each path corresponds to one cluster set. If the channel basis matches the path angle well, there is no energy leakage and the corresponding cluster set has only one element; otherwise, the cluster set has multiple elements. In step S3, we estimate the central support of each path by averaging the index values of each cluster set. For example, in Fig. 2a, the central support is 44 for path 2.
In step S4, we consider how to obtain the probable support of the downlink channel from the uplink channel support. Taking the angle deviation between uplink and downlink into account, the change of the channel tap index is within about \( \pm {L}_r\sqrt{2-2\mathit{\cos}\Delta} \) according to Property 1. If the number of antennas is 100 and the antennas are spaced at half a wavelength, then L_r = 50. If the angle deviation is about 5°, the channel index deviation is about ±4.36. From the leakage effect analysis in the previous section, at most a fraction 1/(π(Δm − 1)) of the energy is located outside a rectangular neighborhood of the central channel support with interval ±Δm. For example, if we aim to capture about 90% of the energy, then Δm = 4.
Based on the uplink channel support information, we can diagnose the downlink channel supports in step S4. The diagnosis procedure is as follows: first, we find the number of multipaths and the central support index p_i of each path in the spatial domain; then, taking the angle deviations into account, it can be inferred that the central support index of each path changes within the range \( \left[{p}_i-{L}_r\sqrt{2-2\mathit{\cos}\Delta},{p}_i+{L}_r\sqrt{2-2\mathit{\cos}\Delta}\right] \); next, taking the basis mismatch effect and the 90% energy criterion into consideration, the probable support indices lie within the range \( \left[{p}_i-{L}_r\sqrt{2-2\mathit{\cos}\Delta}-\Delta m,\cdots, {p}_i+{L}_r\sqrt{2-2\mathit{\cos}\Delta}+\Delta m\right] \); from this, we obtain the probable downlink channel support set.
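A minimal Python sketch of this diagnosis flow is shown below, assuming scikit-learn's DBSCAN implementation; the magnitude threshold, eps, min_samples, and the function name are illustrative assumptions rather than the exact parameters of Algorithm 1.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def diagnose_downlink_support(h_a_ul, L_r=50.0, delta_deg=5.0, dm=4, thresh=0.1):
    """Sketch of Algorithm 1: cluster the significant uplink taps, take each
    cluster mean as a path's central support, then widen by the angle-deviation
    and leakage margins to obtain the probable downlink support set."""
    mag = np.abs(h_a_ul)
    idx = np.flatnonzero(mag > thresh * mag.max())          # significant taps
    # S1: cluster tap indices; isolated taps are labeled -1 (outliers)
    labels = DBSCAN(eps=2, min_samples=2).fit_predict(idx.reshape(-1, 1))
    clusters = [idx[labels == c] for c in set(labels) if c != -1]
    clusters += [np.array([i]) for i in idx[labels == -1]]  # S2: re-add lone taps
    # S3: central support per path; S4: widen by the two margins
    margin = int(np.ceil(L_r * np.sqrt(2 - 2 * np.cos(np.deg2rad(delta_deg))))) + dm
    support, N = set(), len(h_a_ul)
    for c in clusters:
        p = int(round(c.mean()))
        support.update(range(max(0, p - margin), min(N, p + margin + 1)))
    return np.array(sorted(support))
```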
4.2 Compressive downlink channel estimation with prior support information
First, the BS uses a CS recovery algorithm for uplink channel estimation, and the UT feeds the received pilot signal back to the BS. At the BS, the channel support diagnosis procedure of Algorithm 1 is carried out. To integrate the diagnosed support information of the downlink channel into the CS algorithm, we assign a weighting matrix W = diag([w_1, ⋯, w_N]) based on the diagnosed support information. The weights are given by
$$ w(i)=\begin{cases}{w}_1=1, & \mathrm{if}\ i\in {\Omega}^d\\ {w}_2=\sigma, & \mathrm{if}\ i\notin {\Omega}^d\end{cases} $$
where 0 < σ < 1 is the penalty parameter and Ω^d is the estimated support set of the downlink spatial channel. In this paper, we merge the weighting matrix into the subspace pursuit (SP) algorithm; the details of the modified SP with prior support information are presented in Algorithm 2. Compared with the standard SP, the main difference is the candidate support selection in step S1. In the standard SP, the n′ entries of largest magnitude are selected from the vector A^H V_{k−1}, whereas in the proposed weighted SP algorithm the weighting matrix W, which contains the prior information from the uplink channel, is taken into consideration. It should be noted that the diagnosed support information can equally be merged into other CS recovery algorithms.
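A compact sketch of the weighted SP iteration just described is given below; the fixed sparsity level k and the residual-growth stopping rule are assumptions of this sketch, not necessarily the exact steps of Algorithm 2.

```python
import numpy as np

def weighted_sp(y, A, k, w, max_iter=30):
    """Weighted subspace pursuit: the diagnosed-support weights w bias the
    candidate selection in step S1; the remaining steps follow standard SP."""
    N = A.shape[1]
    resid, supp = y.copy(), np.array([], dtype=int)
    h = np.zeros(N, dtype=complex)
    for _ in range(max_iter):
        # S1: pick k candidates from the weighted correlation W A^H v
        corr = w * np.abs(A.conj().T @ resid)
        cand = np.union1d(supp, np.argsort(corr)[-k:])
        # S2: least squares on the merged support, then prune to k entries
        x = np.linalg.lstsq(A[:, cand], y, rcond=None)[0]
        supp_new = cand[np.argsort(np.abs(x))[-k:]]
        # S3: final least squares on the pruned support
        x = np.linalg.lstsq(A[:, supp_new], y, rcond=None)[0]
        # S4: update the residue; stop once it no longer shrinks
        resid_new = y - A[:, supp_new] @ x
        if supp.size and np.linalg.norm(resid_new) >= np.linalg.norm(resid):
            break
        supp, resid = supp_new, resid_new
        h = np.zeros(N, dtype=complex)
        h[supp] = x
    return h
```

Given the diagnosed support set, the weight vector of (23) can be built, e.g., as w = np.where(support_mask, 1.0, sigma) with a small sigma such as 0.1.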
5 RIP-based performance analyses
In this section, we study the RIP-based performance of the proposed algorithm. The basic idea of the analysis stems mainly from Dai and Milenkovic [27] but takes the weights in (23) into consideration. Compared with the standard SP algorithm, the main difference in the proposed algorithm is the support updating step (S1) in Algorithm 2. In the following, for brevity, y^d is denoted as y and \( {\boldsymbol{h}}_a^d \) is denoted as h.
The following analysis uses several propositions from [27]. We restate them first to keep the analysis self-contained. According to the definition of the residue vector V_{k−1}, we can get
$$ {\displaystyle \begin{array}{c}{\boldsymbol{V}}_{k-1}= resid\left(\boldsymbol{y},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\\ {}= resid\left({\boldsymbol{A}}_{T-{\Gamma}^{k-1}}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}}+{\boldsymbol{A}}_{T\cap {\Gamma}^{k-1}}{\boldsymbol{h}}_{T\cap {\Gamma}^{k-1}}+\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\\ {}= resid\left({\boldsymbol{A}}_{T-{\Gamma}^{k-1}}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)+ resid\left({\boldsymbol{A}}_{T\cap {\Gamma}^{k-1}}{\boldsymbol{h}}_{T\cap {\Gamma}^{k-1}},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\\ {}= resid\left({\boldsymbol{A}}_{T-{\Gamma}^{k-1}}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\\ {}={\boldsymbol{A}}_{T-{\Gamma}^{k-1}}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}}+{\boldsymbol{A}}_{\Gamma^{k-1}}{\boldsymbol{h}}_{p,{\Gamma}^{k-1}}+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\end{array}} $$
where T is the actual support of h and \( {\boldsymbol{h}}_{p,{\Gamma}^{k-1}}=-{\left({\boldsymbol{A}}_{\Gamma^{k-1}}^H{\boldsymbol{A}}_{\Gamma^{k-1}}\right)}^{-1}{\boldsymbol{A}}_{\Gamma^{k-1}}^H{\boldsymbol{A}}_{T-{\Gamma}^{k-1}}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}} \). The fourth line of (24) follows from the definition of the residue, and the fifth line follows from the definition of the projection. One can write
$$ {\boldsymbol{V}}_{k-1}={\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)=\left[{\boldsymbol{A}}_{T-{\Gamma}^{k-1}},{\boldsymbol{A}}_{\Gamma^{k-1}}\right]\left[\begin{array}{c}{\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\\ {}{\boldsymbol{h}}_{p,{\Gamma}^{k-1}}\end{array}\right]+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right). $$
Proposition 1 [(25) in [27]]: \( {\left\Vert {\boldsymbol{h}}_{p,{\Gamma}^{k-1}}\right\Vert}_2\le \frac{\delta_{2k}}{1-{\delta}_{2k}}{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2 \), where T is the correct support set and Γ^{k−1} is the estimated support at the (k − 1)th iteration in step S3 of Algorithm 2.
Proposition 2 [(16) in [27]]: \( {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le \frac{1+{\delta}_{3k}}{1-{\delta}_{3k}}{\left\Vert {\boldsymbol{h}}_{T-{\overset{\sim }{\Gamma}}^k}\right\Vert}_2+\frac{2}{1-{\delta}_{3k}}{\left\Vert \boldsymbol{n}\right\Vert}_2 \), where T is the actual support set and \( {\overset{\sim }{\Gamma}}^k \) and Γ^k are the support sets at the kth iteration in steps S1 and S3 of Algorithm 2, respectively.
In step S4 of Algorithm 2, the residue vector is calculated after the new support set is obtained in step S2. The residue vector is given by (24). Propositions 1 and 2 give the inequalities satisfied by \( {\boldsymbol{h}}_{T-{\Gamma}^{k-1}} \) and \( {\boldsymbol{h}}_{p,{\Gamma}^{k-1}} \) in (24). Since only step S1 of our proposed algorithm differs from the standard SP, Propositions 1 and 2 apply directly, and we need only consider the impact of step S1 on the estimation error.
Theorem 3: Using the weighting matrix W in (23), the proposed SP solution at the kth iteration satisfies the inequality
$$ {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le \sqrt{\frac{w_1^2+{w}_2^2}{w_1^2}}\frac{\delta_{3k}\left(1+{\delta}_{3k}\right)}{{\left(1-{\delta}_{3k}\right)}^3}{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+\frac{4\left(1+{\delta}_{3k}\right)}{{\left(1-{\delta}_{3k}\right)}^2}{\left\Vert \boldsymbol{n}\right\Vert}_2. $$
Proof: According to the definition of Ω in step S1 of Algorithm 2, we have
$$ {\left\Vert {\boldsymbol{W}}_{\Omega}{\boldsymbol{A}}_{\Omega}^H{\boldsymbol{V}}_{k-1}\right\Vert}_2\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}}{\boldsymbol{A}}_T^H{\boldsymbol{V}}_{k-1}\right\Vert}_2 $$
Removing the common columns between Ω and T on both sides of (25) and applying \( {\left\Vert resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\right\Vert}_2\le {\left\Vert \boldsymbol{n}\right\Vert}_2 \) from the definition of residue operation, we arrive at
$$ {\displaystyle \begin{array}{c}{\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{V}}_{k-1}\right\Vert}_2\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{T-\Omega}^H{\boldsymbol{V}}_{k-1}\right\Vert}_2\\ {}\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2-{\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\right\Vert}_2\\ {}\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2-{w}_1\sqrt{1+{\delta}_k}{\left\Vert resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\right\Vert}_2\\ {}\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2-{w}_1\sqrt{1+{\delta}_k}{\left\Vert \boldsymbol{n}\right\Vert}_2\end{array}} $$
The second line of (26) follows from the triangle inequality and the expression of V_{k−1}. The third line follows from the definition of W in (23) and the RIP property. On the other hand, by applying the triangle inequality, we have
$$ {\displaystyle \begin{array}{c}{\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{V}}_{k-1}\right\Vert}_2\le {\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2+{\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right)\right\Vert}_2\\ {}\le {\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2+{w}_1\sqrt{1+{\delta}_k}{\left\Vert \boldsymbol{n}\right\Vert}_2\end{array}} $$
Combining (26) and (27), we have
$$ {\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2+2{w}_1\sqrt{1+{\delta}_k}{\left\Vert \boldsymbol{n}\right\Vert}_2\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2 $$
Then, we have
$$ {\displaystyle \begin{array}{c}{\left\Vert {\boldsymbol{W}}_{\Omega -\mathrm{T}}{\boldsymbol{A}}_{\Omega -\mathrm{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2={\left\Vert {\boldsymbol{W}}_{\left(\Omega -\mathrm{T}\right)\cap \widehat{T}}{\boldsymbol{A}}_{\left(\Omega -\mathrm{T}\right)\cap \widehat{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2+{\left\Vert {\boldsymbol{W}}_{\left(\Omega -\mathrm{T}\right)-\widehat{T}}{\boldsymbol{A}}_{\left(\Omega -\mathrm{T}\right)-\widehat{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\\ {}={w}_1^2{\left\Vert {\boldsymbol{A}}_{\left(\Omega -\mathrm{T}\right)\cap \widehat{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2+{w}_2^2{\left\Vert {\boldsymbol{A}}_{\left(\Omega -\mathrm{T}\right)-\widehat{T}}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\\ {}\le {w}_1^2{\delta}_{3k}^2{\left\Vert {\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2+{w}_2^2{\delta}_{3k}^2{\left\Vert {\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\\ {}=\left({w}_1^2+{w}_2^2\right)\ {\delta}_{3k}^2{\left\Vert {\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\end{array}} $$
where \( \widehat{T} \) is the diagnosed support set and the weight is given by (23). Similarly, we have
$$ {\displaystyle \begin{array}{c}{\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\Omega}{\boldsymbol{A}}_{\mathrm{T}-\Omega}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\ge {\left\Vert {\boldsymbol{W}}_{\mathrm{T}-\left(\Omega +{\Gamma}^{k-1}\right)}{\boldsymbol{A}}_{\mathrm{T}-\left(\Omega +{\Gamma}^{k-1}\right)}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}\right\Vert}_2^2\\ {}={\left\Vert {\boldsymbol{W}}_{\mathrm{T}-{\tilde{\Gamma}}^k}{\boldsymbol{A}}_{\mathrm{T}-{\tilde{\Gamma}}^k}^H{\boldsymbol{A}}_{\mathrm{T}-{\tilde{\Gamma}}^k}{\left({\boldsymbol{h}}_{r,k-1}\right)}_{\mathrm{T}-{\tilde{\Gamma}}^k}\right\Vert}_2^2+{\left\Vert {\boldsymbol{W}}_{\mathrm{T}-{\tilde{\Gamma}}^k}{\boldsymbol{A}}_{\mathrm{T}-{\tilde{\Gamma}}^k}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}-\left(\mathrm{T}-{\tilde{\Gamma}}^k\right)}{\left({\boldsymbol{h}}_{r,k-1}\right)}_{T\cup {\Gamma}^{k-1}-\left(\mathrm{T}-{\tilde{\Gamma}}^k\right)}\right\Vert}_2^2\\ {}\ge {w}_1^2{\left(1-{\delta}_k\right)}^2{\left\Vert {\left({\boldsymbol{h}}_{r,k-1}\right)}_{\mathrm{T}-{\tilde{\Gamma}}^k}\right\Vert}_2^2+{\left\Vert {\boldsymbol{W}}_{\mathrm{T}-{\tilde{\Gamma}}^k}{\boldsymbol{A}}_{\mathrm{T}-{\tilde{\Gamma}}^k}^H{\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}-\left(\mathrm{T}-{\tilde{\Gamma}}^k\right)}{\left({\boldsymbol{h}}_{r,k-1}\right)}_{T\cup {\Gamma}^{k-1}-\left(\mathrm{T}-{\tilde{\Gamma}}^k\right)}\right\Vert}_2^2\\ {}\ge {w}_1^2{\left(1-{\delta}_k\right)}^2{\left\Vert {\left({\boldsymbol{h}}_{r,k-1}\right)}_{\mathrm{T}-{\tilde{\Gamma}}^k}\right\Vert}_2^2\end{array}} $$
where the second line of (30) follows from step S1 of Algorithm 2 and the third line follows from Algorithm 1 (the true support indices receive the weight w_1) and the RIP property.
Combining (29) and (30) into (28), we can get
$$ \sqrt{w_1^2+{w}_2^2}{\delta}_{3k}{\left\Vert {\boldsymbol{h}}_{r,k-1}\right\Vert}_2+2{w}_1\sqrt{1+{\delta}_k}{\left\Vert \boldsymbol{n}\right\Vert}_2\ge {w}_1\left(1-{\delta}_k\right){\left\Vert {\left({\boldsymbol{h}}_{r,k-1}\right)}_{\mathrm{T}-\left(\Omega +\Gamma \right)}\right\Vert}_2 $$
Noting the explicit form of ‖h_{r,k−1}‖_2 and applying the triangle inequality, one has
$$ {\displaystyle \begin{array}{c}{\left\Vert {\boldsymbol{h}}_{r,k-1}\right\Vert}_2\le {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+{\left\Vert {\boldsymbol{h}}_{p,{\Gamma}^{k-1}}\right\Vert}_2\\ {}\le \left(1+\frac{\delta_{2k}}{1-{\delta}_{2k}}\right){\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2\\ {}\le \frac{1}{1-{\delta}_{3k}}{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2\end{array}} $$
where the second line of (32) is obtained by using Proposition 1. Noting that \( {\boldsymbol{V}}_{k-1}={\boldsymbol{A}}_{T\cup {\Gamma}^{k-1}}{\boldsymbol{h}}_{r,k-1}+ resid\left(\boldsymbol{n},{\boldsymbol{A}}_{\Gamma^{k-1}}\right) \), we have \( {\left({\boldsymbol{h}}_{r,k-1}\right)}_{\mathrm{T}-\left(\Omega +\Gamma \right)}={\boldsymbol{h}}_{T-{\Gamma}^k} \). Then, we get
$$ {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2\le \sqrt{\frac{w_1^2+{w}_2^2}{w_1^2}}\frac{\delta_{3k}}{{\left(1-{\delta}_{3k}\right)}^2}{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+\frac{2\sqrt{1+{\delta}_k}}{1-{\delta}_k}{\left\Vert \boldsymbol{n}\right\Vert}_2 $$
Combining Proposition 2 with (33), we complete the proof. ■
In the standard SP algorithm, as shown in [27], the kth iteration satisfies
$$ {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le \frac{2{\delta}_{3k}\left(1+{\delta}_{3k}\right)}{{\left(1-{\delta}_{3k}\right)}^3}{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+\frac{4\left(1+{\delta}_{3k}\right)}{{\left(1-{\delta}_{3k}\right)}^2}{\left\Vert \boldsymbol{n}\right\Vert}_2 $$
Since, for the weighting matrix in (23), \( \sqrt{\frac{w_1^2+{w}_2^2}{w_1^2}}<\sqrt{2} \), the proposed algorithm converges faster than the standard SP and has better estimation accuracy. When δ_{3k} = 0.083, as required in [27], and assuming w_1 = w_2 = 1 for the worst case, our proposed algorithm gives \( {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le 0.164{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+5.152{\left\Vert n\right\Vert}_2 \), while the standard SP gives \( {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le 0.232{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+5.152{\left\Vert n\right\Vert}_2 \). When σ = 0.1 in the proposed algorithm, we have \( {\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^k}\right\Vert}_2\le 0.117{\left\Vert {\boldsymbol{h}}_{T-{\Gamma}^{k-1}}\right\Vert}_2+5.152{\left\Vert n\right\Vert}_2 \). In this way, our proposed algorithm converges faster than the standard SP and has better recovery performance; in other words, the restriction on δ_{3k} is weakened in the proposed algorithm. The performance improvement can be explained by the fact that the SP algorithm benefits from the prior support information.
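These constants can be reproduced in a few lines (a quick numeric check, not part of the original derivation):

```python
import numpy as np

d = 0.083                                    # delta_{3k} as required in [27]
c_noise = 4 * (1 + d) / (1 - d) ** 2         # noise amplification, both algorithms
for name, wfac in [("standard SP", 2.0),
                   ("weighted SP, w1 = w2 = 1", np.sqrt(2.0)),
                   ("weighted SP, sigma = 0.1", np.sqrt(1 + 0.1 ** 2))]:
    c = wfac * d * (1 + d) / (1 - d) ** 3    # contraction factor per iteration
    print(f"{name}: contraction {c:.3f}, noise term {c_noise:.3f}")
# prints contraction factors of about 0.232, 0.164, and 0.117, and a noise term of 5.152
```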
6 Simulations
In this section, simulations are carried out to evaluate the performance of the proposed downlink channel estimation with prior information. The BS is equipped with 100 antennas, and the UT is equipped with a single antenna. The channel model is generated according to the spatial MIMO channel of 3GPP TS36.900. The same SNR is assumed for both uplink and downlink. Since we focus on the impact of the prior information on the performance of downlink channel estimation, the gain of multiple pilots for the uplink channel is not discussed, and the uplink pilot length is set to 1. The estimation accuracy of the uplink channel can benefit from an increased number of pilots.
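For concreteness, the sketch below wires the earlier helpers into one trial of this setup; the path angles, gains, sparsity level k, and the noiseless-pilot simplification are illustrative assumptions.

```python
import numpy as np

# One trial reusing the earlier sketches (steer, physical_channel,
# diagnose_downlink_support, weighted_sp); parameters are illustrative.
N, T_d, k, delta = 100, 50, 12, 0.5
L = N * delta
rng = np.random.default_rng(1)
D = np.stack([steer(i / L, N, delta) for i in range(N)], axis=1)   # angular basis

ang_ul = np.array([25.0, 70.0, 120.0])                 # uplink path angles (deg)
ang_dl = ang_ul + rng.uniform(-5, 5, 3)                # downlink: <= 5 deg deviation
gains = [1.0, 0.8, 0.6]
h_ul = physical_channel(np.cos(np.deg2rad(ang_ul)), [0] * 3, gains, N, 1, delta)[:, 0]
h_dl = physical_channel(np.cos(np.deg2rad(ang_dl)), [0] * 3, gains, N, 1, delta)[:, 0]

supp = diagnose_downlink_support(D.conj().T @ h_ul, L_r=L)         # from the uplink
w = np.full(N, 0.1)
w[supp] = 1.0                                                      # weights of (23)

A = (rng.standard_normal((T_d, N)) + 1j * rng.standard_normal((T_d, N))) / np.sqrt(2 * T_d)
y = A @ h_dl                                                       # noiseless pilots, cf. (2)
h_hat = D @ weighted_sp(y, A @ D, k, w)
print("NMSE:", np.linalg.norm(h_hat - h_dl) ** 2 / np.linalg.norm(h_dl) ** 2)
```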
In Fig. 4, the penalty value σ in the weighting matrix is 0.1, the downlink pilot length is 50, and the SNRs of the uplink and downlink are equal. We compare the downlink channel estimation performance of four algorithms: (1) SP with no prior information, as in [4, 14]; (2) weighted SP with the same supports as the uplink, as in [18]; (3) weighted SP with the proposed diagnosed supports; and (4) the standard iteratively reweighted least squares (IRLS) of [28]. It can be seen that in the low-SNR region, algorithms (1), (2), and (3) have almost the same MSE performance, while the performance of algorithm (4) fluctuates around them. In the high-SNR region, however, the proposed algorithm performs better, which can be explained by the fact that the support information from the uplink channel is more accurate in the high-SNR region, since we assume a common SNR for uplink and downlink. The proposed support diagnosis information improves the channel estimation MSE because it accounts for the basis mismatch, the leakage effect, and the slight deviation of AOA and DOA between uplink and downlink.
MSE performance of the weighted SP recovery based on the diagnosed supports
In Fig. 5, the penalty value σ in the weighting matrix is 0.1. We compare the performance of the downlink channel estimation for different pilot lengths. With increasing downlink pilot length, the MSE performance of all four algorithms ((1) SP with no prior information, as in [4, 14]; (2) weighted SP with the same supports as the uplink, as in [18]; (3) weighted SP with the proposed diagnosed supports; and (4) the standard iteratively reweighted least squares (IRLS) of [28]) improves, which is consistent with the results of CS theory. The proposed recovery algorithm with diagnosed supports has the best recovery performance and gains the most from increasing the pilot length in the high-SNR region.
Weighted SP recovery based on the diagnosed supports with different pilot lengths
In Fig. 6, we compare the performance of the downlink channel estimation for different penalty values in the weighting matrix of the proposed support diagnosis algorithm. It can be seen that a smaller penalty value in the weighting matrix yields better performance, especially in the high-SNR region.
Weighted SP recovery based on the diagnosed supports with different penalty values
From the simulation results above, it can be concluded that (1) the uplink channel can offer support information for downlink channel estimation, and this support information is useful for downlink channel recovery; (2) the assumption of common supports for the downlink and uplink channels is not practical, and the estimation accuracy deteriorates if the support difference is ignored, whereas the proposed support diagnosis algorithm accounts for the basis mismatch and the angle deviation between uplink and downlink, which further improves the downlink channel estimation; and (3) the value of the weighting matrix is important for the proposed weighted SP algorithm, and a smaller penalty value in the weighting matrix is preferred, especially in the high-SNR region. In brief, the proposed support diagnosis algorithm is effective and beneficial for downlink compressive channel estimation.
7 Conclusions
In this paper, we propose a downlink compressive channel estimation based on weighted SP for FDD massive MIMO systems, where the weighted SP makes use of prior support information to improve the estimation performance. The reciprocity between the uplink and downlink channels in the angular domain is used to diagnose the prior support information of the downlink channel. The proposed support diagnosis algorithm accounts for the basis mismatch, the leakage effect, and the angle deviation between the uplink and downlink channels, and applies the DBSCAN algorithm from machine learning to the channel support diagnosis. The RIP-based analysis shows better convergence and error performance of the proposed algorithm compared with the standard SP. Simulation results verify that the proposed algorithm improves the downlink channel estimation accuracy compared to IRLS, to the SP algorithm that does not utilize the uplink prior information, and to the weighted SP that assumes common supports for the downlink and uplink channels.
AOA:
Angle of arrival
AOD:
Angle of departure
BS:
Base station
CS:
Compressed sensing
CSI:
Channel state information
DBSCAN:
Density-based spatial clustering of applications with noise
FDD:
Frequency division duplexing
IRLS:
Iteratively reweighted least squares
LTE:
Long-term evolution
MBP:
Modified basis pursuit
MIMO:
Multiple-input multiple-output
MU:
Multiple user
OMP:
Orthogonal matching pursuit
Restricted isometry property
SP:
Subspace pursuit
Single user
TDD:
Time division duplexing
UT:
User terminal
This work is supported in part by the National Science Foundation of China (No. 61601509) and the China Postdoctoral Science Foundation (Grant No. 2016M603045). We thank the reviewers for their comments.
The National Science Foundation of China (Grant No. 61601509) and the China Postdoctoral Science Foundation (Grant No. 2016M603045) supported the simulations and data analyses.
All authors discussed the experiments; WL performed the experiments and wrote the paper. YW, QF, and SP provided useful comments on the paper. All authors have read and approved the final manuscript.
Wei Lu received a Ph.D. degree in communications and information system from Huazhong University of Science and Technology, China, in 2013. He is now a lecturer at the Air Force Early Warning Academy, China. His current research interests focus on compressed sensing and signal processing. Yongliang Wang received a Ph.D. degree in signal processing from Xidian University, China, in 1994. He is now a professor at the Air Force Early Warning Academy, China. His current research interests focus on STAP. Qiqing Fang is an associate professor at the Air Force Early Warning Academy, China. His current research interests focus on operations research and management. Shixin Peng received a Ph.D. degree in communications and information system from Huazhong University of Science and Technology, China, in 2015. He is now a postdoctoral researcher at Central China Normal University. His current research interests focus on wireless communications and signal processing.
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Air Force Early Warning Academy, Wuhan, China
National Engineering Research Centre for E-Learning, Central China Normal University, Wuhan, China
X Liu, F Li, ZY Na, Optimal resource allocation in simultaneous cooperative spectrum sensing and energy harvesting for multichannel cognitive radio. IEEE Access 5, 3801–3812 (2017)
G Wunder, H Boche, T Strohmer, P Jung, Sparse signal processing concepts for efficient 5G system design. IEEE Access 3, 195–208 (2015)
L Lu, GY Li, AL Swindlehurst, A Ashikhmin, R Zhang, An overview of massive MIMO: benefits and challenges. IEEE J. Sel. Topics Signal Process. 8(5), 742–758 (2014)
X Rao, VKN Lau, Distributed compressive CSIT estimation and feedback for FDD multi-user massive MIMO systems. IEEE Trans. Signal Process. 62(12), 3261–3271 (2014)
N Vaswani, W Lu, Modified-CS: modifying compressive sensing for problems with partially known support. IEEE Trans. Signal Process. 58(9), 4595–4607 (2010)
M Masood, LH Afify, TY Al-Naffouri, Efficient coordinated recovery of sparse channels in massive MIMO. IEEE Trans. Signal Process. 63(1), 104–118 (2015)
Y Nan, L Zhang, X Sun, Weighted compressive sensing based uplink channel estimation for time division duplex massive multi-input multi-output systems. IET Commun. 11(3), 355–361 (2017)
Y Nan, L Zhang, X Sun, Efficient downlink channel estimation scheme based on block-structured compressive sensing for TDD massive MU-MIMO systems. IEEE Wireless Commun. Lett. 4(4), 345–348 (2015)
X Rao, V Lau, Compressive sensing with prior support quality information and application to massive MIMO channel estimation with temporal correlation. IEEE Trans. Signal Process. 63(18), 4914–4924 (2015)
A Liu, VKN Lau, W Dai, Exploiting burst-sparsity in massive MIMO with partial channel support information. IEEE Trans. Wirel. Commun. 15(11), 7820–7830 (2016)
YH Han, JW Lee, DJ Love, Compressed sensing-aided downlink channel training for FDD massive MIMO systems. IEEE Trans. Commun. 65(7), 2852–2862 (2017)
JC Shen, J Zhang, E Alsusa, KB Letaief, Compressed CSI acquisition in FDD massive MIMO: how much training is needed? IEEE Trans. Wirel. Commun. 15(6), 4145–4156 (2016)
A Liu, F Zhu, VKN Lau, Closed-loop autonomous pilot and compressive CSIT feedback resource adaptation in multi-user FDD massive MIMO systems. IEEE Trans. Signal Process. 65(1), 173–183 (2017)
CC Tseng, JY Wu, TS Lee, Compressive downlink CSI estimation for FDD massive MIMO systems: a weighted block L1-minimization approach, in 2016 IEEE 27th Annual International Symposium on Personal, Indoor and Mobile Radio Communications (2016)
X Cheng, J Sun, S Li, Channel estimation for FDD multi-user massive MIMO: a variational Bayesian inference-based approach. IEEE Trans. Wirel. Commun. 16(11), 7590–7602 (2017)
K Hugl, K Kalliola, J Laurila, Spatial reciprocity of uplink and downlink radio channel in FDD systems, in Proc. COST 273 Technical Document TD, vol 66 (2002), p. 7
U Ugurlu, R Wichman, CB Ribeiro, C Wijting, A multipath extraction-based CSI acquisition method for FDD cellular networks with massive antenna arrays. IEEE Trans. Wirel. Commun. 15(4), 2940–2953 (2016)
Y Ding, BD Rao, Channel estimation using joint dictionary learning in FDD massive MIMO systems, in IEEE Global Conference on Signal and Information Processing (2015), pp. 185–189
G Tauböck, F Hlawatsch, D Eiwen, H Rauhut, Compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing. IEEE J. Sel. Topics Signal Process. 4(2), 255–271 (2010)
Universal Mobile Telecommunications System (UMTS): spatial channel model for Multiple Input Multiple Output (MIMO) simulations (3GPP, TS 36.900 Release 14) (2017), http://www.3gpp.org
A Liu, FB Zhu, VKN Lau, Closed-loop autonomous pilot and compressive CSIT feedback resource adaptation in multi-user FDD massive MIMO systems. IEEE Trans. Signal Process. 65(1), 173–183 (2017)
W Lu, YZ Liu, DS Wang, Compressed sensing in spatial MIMO channels, in 2011 2nd International Conference on Wireless VITAE (2011)
G Xu, H Liu, An effective transmission beamforming scheme for frequency-division-duplex digital wireless communication systems, in Proc. Int. Conf. Acoust. Speech Signal Process. (1995), pp. 1729–1732
AF Molisch, A Kuchar, J Laurila, et al., Geometry-based directional model for mobile radio channels: principles and implementation. Eur. Trans. Telecommun. 14(4), 351–359 (2003)
M Ester, HP Kriegel, J Sander, X Xu, A density-based algorithm for discovering clusters in large spatial databases with noise, in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96) (1996), pp. 226–231
E Schubert, J Sander, M Ester, et al., DBSCAN revisited, revisited: why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 42(3), 1–21 (2017)
W Dai, O Milenkovic, Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 55(5), 2230–2249 (2009)
R Chartrand, W Yin, Iteratively reweighted algorithms for compressed sensing, in Proceedings of 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (2008)
Triple integral question with spherical coordinates
I have this triple integral question. I'm pretty sure that it is solvable with spherical coordinates. I found the upper and lower limits of $\theta$ and $\rho$ but I couldn't find the limits of $\varphi$.
Here is the question:
$$\int_E (x^2+y^2+z^2)^{3/2}\,dx\,dy\,dz$$ where $E$ is in the first octant, bounded by the plane $z=0$ and the hemisphere $x^2+y^2+z^2=9$, bounded above by the hemisphere $x^2+y^2+z^2=16$, and bounded by the planes $y=0$ and $y=x$.
I have a sketch so far. And these are the limits I have found: $3 \le \rho \le 4$ and $0 \le \theta \le \pi /4$, and the integrand is $\rho^5\sin(\varphi)\,d\rho\, d\varphi\, d\theta$.
Edit: typing mistake.
2nd Edit: I think $0 \le \varphi \le \pi /2$; am I right?
integration spherical-coordinates
Cem Sarıer
Is it $z$? or $z^2$?
– caverac
It is $z^2$. Thank you for pointing that out. @caverac
– Cem Sarıer
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} V & \,\,\,\stackrel{\mbox{def.}}{=}\, \iiint_{\large \mathbb{R}^{3}}\pars{x^{2} + y^{2} + z^{2}}^{3/2} \bracks{9 < x^{2} + y^{2} + z^{2} < 16}\bracks{0 < y < x}\dd x\,\dd y\,\dd z \\[1cm] & \stackrel{\mbox{Sph. Cord.}}{=} \iiint_{\atop {\!\!\Large\mathbb{R}^{3}}}r^{3}\bracks{9 < r^{2} < 16} \bracks{0 < r\sin\pars{\theta}\sin\pars{\phi} < r\sin\pars{\theta}\cos\pars{\phi}} \times \\[3mm] & \phantom{\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\stackrel{\mbox{Sph. Cord.}}{=}} r^{2}\sin\pars{\theta}\,\dd r\,\dd\theta\,\dd \phi \\[1cm] & = \int_{0}^{2\pi}\int_{0}^{\pi}\int_{3}^{4} \bracks{0 < \sin\pars{\phi} < \cos\pars{\phi}} r^{5}\sin\pars{\theta}\,\dd r\,\dd\theta\,\dd \phi \\[5mm] & = \bracks{\int_{0}^{\pi}\sin\pars{\theta}\,\dd\theta} \pars{\int_{3}^{4}r^{5}\,\dd r} \int_{-\pi}^{\pi}\bracks{0 < \sin\pars{\phi} < \cos\pars{\phi}}\,\dd\phi \\[5mm] & = {3367 \over 3}\braces{% \int_{0}^{\pi}\bracks{0 < \sin\pars{\phi} < \cos\pars{\phi}}\,\dd\phi + \int_{0}^{\pi}\bracks{0 < -\sin\pars{\phi} < \cos\pars{\phi}}\,\dd\phi} \\[5mm] & = {3367 \over 3}\braces{% \int_{-\pi/2}^{\pi/2}\!\!\!\!\!\bracks{0 < \cos\pars{\phi} < -\sin\pars{\phi}}\,\dd\phi + \int_{-\pi/2}^{\pi/2}\!\!\!\!\!\bracks{0 < -\cos\pars{\phi} < -\sin\pars{\phi}}\,\dd\phi} \\[1cm] & = {3367 \over 3}\braces{% \int_{0}^{\pi/2}\!\!\!\!\!\bracks{0 < \cos\pars{\phi} < -\sin\pars{\phi}}\,\dd\phi + \int_{0}^{\pi/2}\!\!\!\!\!\bracks{0 < -\cos\pars{\phi} < -\sin\pars{\phi}}\,\dd\phi} \label{1}\tag{1} \\[3mm] & + {3367 \over 3}\braces{% \int_{0}^{\pi/2}\bracks{0 < \cos\pars{\phi} < \sin\pars{\phi}}\,\dd\phi + \int_{0}^{\pi/2}\bracks{0 < -\cos\pars{\phi} < \sin\pars{\phi}}\,\dd\phi} \end{align}
Integrals in line \eqref{1} vanishes out.
Then, \begin{align} V & = {3367 \over 3}\pars{\int_{\pi/4}^{\pi/2}\dd\phi + \int_{0}^{\pi/2}\dd\phi} = \bbx{{3367 \over 4}\,\pi} \approx 2644.4356 \end{align}
Thank you very much, I just solved it too!
@CemS. Thanks a lot.
Using spherical coordinates, $x=\rho \sin\varphi \cos\theta$, $y= \rho \sin\varphi \sin\theta$, $z=\rho \cos\varphi$ and $\rho=\|(x,y,z)\|=\sqrt{x^2+y^2+z^2}$, where $dx\,dy\,dz=\rho^2 \sin\varphi \,d\rho\, d\varphi\, d\theta$, we obtain
$$\int_E(x^2+y^2+z^2)^{3/2}\, dx\,dy\,dz = \int_0^{\pi / 4} \int_0^{\pi / 2} \int_3^4 \rho^5 \sin\varphi \,d\rho\, d\varphi\, d\theta $$ $$= \int_0^{\pi / 4} \int_0^{\pi / 2} \frac{\left[\rho^6\right]_3^4}{6} \sin\varphi \,d\varphi\, d\theta = \frac{3367}{6}\int_0^{\pi / 4} d\theta = \frac{3367\pi}{24} $$
The quadratic $x^2-3x+9=x+41$ has two solutions. What is the positive difference between these solutions?
First we bring $x$ to the left side to get \[x^2-4x+9=41.\]We notice that the left side is almost the square $(x-2)^2=x^2-4x+4$. Subtracting 5 from both sides lets us complete the square on the left-hand side, \[x^2-4x+4=36,\]so \[(x-2)^2=6^2.\]Therefore $x=2\pm6$. The positive difference between these solutions is $8-(-4)=\boxed{12}$. | Math Dataset |
# Introduction to C++ libraries for machine learning
One of the most popular C++ libraries for machine learning is the Eigen library. Eigen is a widely used linear algebra library for C++ that provides a simple and efficient interface for working with vectors, matrices, and other mathematical objects.
Another popular library is the Shark library. Shark is a C++ library for machine learning that provides a wide range of algorithms and tools for tasks such as classification, regression, and clustering. Shark is built on top of the Eigen library, so it inherits its efficiency and ease of use.
In addition to these libraries, there are several other C++ libraries available for machine learning, such as the FANN library (Fast Artificial Neural Network Library) and the LibSVM library. Each of these libraries has its own unique features and capabilities, so it's important to choose the right one based on your specific needs and requirements.
## Exercise
Instructions:
1. Install the Eigen library on your computer.
2. Write a simple C++ program that uses the Eigen library to perform basic linear algebra operations, such as matrix multiplication and vector addition.
3. Compile and run your program to ensure that it works correctly.
### Solution
Here's a simple C++ program that uses the Eigen library to build a matrix and a vector and then solve the linear system `Ax = b`, a basic linear algebra operation:
```cpp
#include <iostream>
#include <Eigen/Dense>
int main() {
Eigen::MatrixXd A(2, 2);
Eigen::VectorXd b(2);
Eigen::VectorXd x(2);
A << 1, 2,
3, 4;
b << 5, 6;
x = A.colPivHouseholderQr().solve(b);
std::cout << "The solution is: " << x << std::endl;
return 0;
}
```
This program defines a 2x2 matrix `A` and a 2x1 vector `b`, then uses the Eigen library to solve the linear system `Ax = b` and print the solution.
# Data preprocessing and handling
Before you can use machine learning algorithms to make forecasts, you need to preprocess and handle your data. This involves cleaning the data, handling missing values, and converting categorical variables into numerical ones.
One common approach to preprocessing data is to scale and normalize the features. Scaling involves converting all features to a common scale, usually between 0 and 1. This is done to ensure that the machine learning algorithms do not give undue importance to features with larger values.
Normalization, on the other hand, involves converting all features to have a mean of 0 and a standard deviation of 1. This is done to ensure that the machine learning algorithms do not give undue importance to features with larger variances.
In addition to scaling and normalization, you may also need to handle missing values in your data. One common approach is to use imputation, which involves estimating the missing values based on the available data.
## Exercise
Instructions:
1. Load a dataset with missing values and categorical variables.
2. Preprocess the data by scaling the features, normalizing them, and imputing the missing values.
3. Split the data into training and testing sets.
### Solution
Here's a simple example of how to preprocess a dataset using the Eigen library:
```cpp
#include <iostream>
#include <Eigen/Dense>
#include <vector>
#include <algorithm>
int main() {
Eigen::MatrixXd data(3, 3);
data << 1, 2, 3,
4, 5, 6,
7, 8, 9;
// Scale the data to a common range, e.g., between 0 and 1
Eigen::MatrixXd scaled_data = ((data.array() - data.minCoeff()) / (data.maxCoeff() - data.minCoeff())).matrix();
// Normalize each feature to have a mean of 0 and a standard deviation of 1
Eigen::RowVectorXd mean = scaled_data.colwise().mean();
Eigen::MatrixXd centered = scaled_data.rowwise() - mean;
Eigen::RowVectorXd stddev = ((centered.array().square().colwise().sum()) / (scaled_data.rows() - 1)).sqrt().matrix();
Eigen::MatrixXd normalized_data = centered;
for (int j = 0; j < normalized_data.cols(); ++j) {
    normalized_data.col(j) /= stddev(j);  // divide each column by its standard deviation
}
// Split the data into training and testing sets
Eigen::MatrixXd training_data = normalized_data.topRows(2);
Eigen::MatrixXd testing_data = normalized_data.bottomRows(1);
return 0;
}
```
This program demonstrates how to preprocess a dataset by scaling the features, normalizing them, and splitting the data into training and testing sets. Imputation of missing values would be handled before these steps, for example by replacing each missing entry with the mean of its column.
# Linear Regression for forecasting
Linear regression is a simple yet powerful machine learning algorithm that can be used for forecasting. It models the relationship between a dependent variable and one or more independent variables by fitting a linear equation to the data.
The equation for a simple linear regression is:
$$y = \beta_0 + \beta_1 x$$
where $y$ is the dependent variable, $x$ is the independent variable, and $\beta_0$ and $\beta_1$ are the regression coefficients.
To perform linear regression in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing linear regression models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train a linear regression model on the training data.
4. Test the model on the testing data and calculate the mean squared error.
### Solution
Here's a simple example of how to perform linear regression using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/LinearRegression.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train a linear regression model on the training data
shark::LinearRegression<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the mean squared error
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
return 0;
}
```
This program demonstrates how to perform linear regression using the Shark library. It loads a dataset, splits it into training and testing sets, trains a linear regression model, and calculates the mean squared error on the testing data.
# K-Nearest Neighbors for classification and regression
K-Nearest Neighbors (KNN) is a popular machine learning algorithm that can be used for both classification and regression tasks. The idea behind KNN is to classify or regress a new data point based on the majority class or average value of its k nearest neighbors in the training data.
To use KNN in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing KNN models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train a KNN model on the training data.
4. Test the model on the testing data and calculate the classification error rate or mean squared error.
### Solution
Here's a simple example of how to perform KNN using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/KNN.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train a KNN model on the training data
shark::KNN<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the classification error rate or mean squared error
if (model.isClassification()) {
shark::ErrorRate<shark::RealVector, shark::RealVector> error_rate;
shark::Real error = error_rate(model.predict(testing_data), testing_data);
std::cout << "The classification error rate is: " << error << std::endl;
} else {
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
}
return 0;
}
```
This program demonstrates how to perform KNN using the Shark library. It loads a dataset, splits it into training and testing sets, trains a KNN model, and calculates the classification error rate or mean squared error on the testing data.
# Decision Trees for classification and regression
Decision trees are a popular machine learning algorithm that can be used for both classification and regression tasks. The idea behind decision trees is to recursively split the data into subsets based on the values of a set of features, and then make a prediction based on the majority class or average value in each subset.
To use decision trees in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing decision tree models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train a decision tree model on the training data.
4. Test the model on the testing data and calculate the classification error rate or mean squared error.
### Solution
Here's a simple example of how to perform decision trees using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/DecisionTree.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train a decision tree model on the training data
shark::DecisionTree<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the classification error rate or mean squared error
if (model.isClassification()) {
shark::ErrorRate<shark::RealVector, shark::RealVector> error_rate;
shark::Real error = error_rate(model.predict(testing_data), testing_data);
std::cout << "The classification error rate is: " << error << std::endl;
} else {
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
}
return 0;
}
```
This program demonstrates how to perform decision trees using the Shark library. It loads a dataset, splits it into training and testing sets, trains a decision tree model, and calculates the classification error rate or mean squared error on the testing data.
# Random Forest for classification and regression
Random forests are an ensemble learning method that can be used for both classification and regression tasks. The idea behind random forests is to train a large number of decision trees on random subsets of the data, and then make a prediction based on the majority class or average value of the predictions of all the trees.
To use random forests in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing random forest models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train a random forest model on the training data.
4. Test the model on the testing data and calculate the classification error rate or mean squared error.
### Solution
Here's a simple example of how to perform random forests using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/RandomForest.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train a random forest model on the training data
shark::RandomForest<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the classification error rate or mean squared error
if (model.isClassification()) {
shark::ErrorRate<shark::RealVector, shark::RealVector> error_rate;
shark::Real error = error_rate(model.predict(testing_data), testing_data);
std::cout << "The classification error rate is: " << error << std::endl;
} else {
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
}
return 0;
}
```
This program demonstrates how to perform random forests using the Shark library. It loads a dataset, splits it into training and testing sets, trains a random forest model, and calculates the classification error rate or mean squared error on the testing data.
# Gradient Boosting for classification and regression
Gradient boosting is an ensemble learning method that can be used for both classification and regression tasks. The idea behind gradient boosting is to train a sequence of weak learners (e.g., decision trees) on the residuals of the previous learners, and then make a prediction based on the sum of the predictions of all the learners.
To use gradient boosting in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing gradient boosting models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train a gradient boosting model on the training data.
4. Test the model on the testing data and calculate the classification error rate or mean squared error.
### Solution
Here's a simple example of how to perform gradient boosting using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/GradientBoosting.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train a gradient boosting model on the training data
shark::GradientBoosting<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the classification error rate or mean squared error
if (model.isClassification()) {
shark::ErrorRate<shark::RealVector, shark::RealVector> error_rate;
shark::Real error = error_rate(model.predict(testing_data), testing_data);
std::cout << "The classification error rate is: " << error << std::endl;
} else {
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
}
return 0;
}
```
This program demonstrates how to perform gradient boosting using the Shark library. It loads a dataset, splits it into training and testing sets, trains a gradient boosting model, and calculates the classification error rate or mean squared error on the testing data.
# Support Vector Machines for classification and regression
Support vector machines (SVMs) are a popular machine learning algorithm that can be used for both classification and regression tasks. The idea behind SVMs is to find a hyperplane that separates the data into different classes or predicts the values of the dependent variable.
To use SVMs in C++, you can use the Shark library. The Shark library provides a simple interface for training and testing SVM models.
## Exercise
Instructions:
1. Load a dataset that contains a dependent variable and one or more independent variables.
2. Split the data into training and testing sets.
3. Train an SVM model on the training data.
4. Test the model on the testing data and calculate the classification error rate or mean squared error.
### Solution
Here's a simple example of how to perform SVMs using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/SVM.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Train an SVM model on the training data
shark::SVM<shark::RealVector, shark::RealVector> model;
model.train(training_data);
// Test the model on the testing data and calculate the classification error rate or mean squared error
if (model.isClassification()) {
shark::ErrorRate<shark::RealVector, shark::RealVector> error_rate;
shark::Real error = error_rate(model.predict(testing_data), testing_data);
std::cout << "The classification error rate is: " << error << std::endl;
} else {
shark::MSE<shark::RealVector, shark::RealVector> mse;
shark::Real error = mse(model.predict(testing_data), testing_data);
std::cout << "The mean squared error is: " << error << std::endl;
}
return 0;
}
```
This program demonstrates how to perform SVMs using the Shark library. It loads a dataset, splits it into training and testing sets, trains an SVM model, and calculates the classification error rate or mean squared error on the testing data.
# Model evaluation and selection
When you have trained a machine learning model, it's important to evaluate its performance and select the best model based on the results. This can be done using various evaluation metrics, such as accuracy, precision, recall, F1 score, and mean squared error.
In addition to evaluation metrics, it's also important to consider the interpretability of the model. Some models, such as decision trees and random forests, are easy to understand and interpret, while others, such as neural networks and gradient boosting, can be more complex and harder to understand.
When selecting a model, it's important to consider factors such as the size of the dataset, the complexity of the problem, and the computational resources available. It may be necessary to try multiple models and compare their performance to find the best one for your specific problem.
## Exercise
Instructions:
1. Train multiple machine learning models on a dataset.
2. Evaluate the performance of each model using appropriate evaluation metrics.
3. Select the best model based on the evaluation results.
### Solution
Here's a simple example of how to evaluate and select the best model using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/LinearRegression.h>
#include <shark/Models/KNN.h>
#include <shark/Models/DecisionTree.h>
#include <shark/Models/RandomForest.h>
#include <shark/Models/GradientBoosting.h>
#include <shark/Models/SVM.h>
#include <shark/ObjectiveFunctions/Classification/ErrorRate.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);

    // Train each candidate model on the training data, evaluate it on the
    // testing data with the appropriate metric (error rate or MSE), and keep
    // the model with the best score.
    // ...

    return 0;
}
```
# Hands-on examples and case studies
We will start by implementing a simple linear regression model to predict the price of a house based on its size and location. Then, we will move on to more complex models, such as K-nearest neighbors, decision trees, random forests, gradient boosting, and support vector machines.
Throughout the examples, we will emphasize the importance of data preprocessing, feature selection, and model evaluation. We will also discuss the challenges and limitations of each model and provide suggestions for improvement.
## Exercise
Instructions:
1. Implement a linear regression model to predict the price of a house.
2. Evaluate the model's performance using appropriate evaluation metrics.
3. Discuss the challenges and limitations of the model.
### Solution
Here's a simple example of how to implement a linear regression model using the Shark library:
```cpp
#include <iostream>
#include <shark/Data/Dataset.h>
#include <shark/Models/LinearRegression.h>
#include <shark/ObjectiveFunctions/Regression/MSE.h>
int main() {
// Load a dataset with a dependent variable and one or more independent variables
shark::Data<shark::RealVector> data;
// ...
// Split the data into training and testing sets
shark::Data<shark::RealVector> training_data = data.subData(0, data.size() / 2);
shark::Data<shark::RealVector> testing_data = data.subData(data.size() / 2, data.size() - data.size() / 2);
// Create a linear regression model
shark::LinearRegression<shark::RealVector, shark::RealVector> model;
// Train the model on the training data
model.train(training_data);
// Evaluate the model's performance on the testing data
shark::MSE<shark::RealVector, shark::RealVector> mse;
    double error = mse(model.predict(testing_data), testing_data);
std::cout << "Mean squared error: " << error << std::endl;
return 0;
}
```
# Future developments and challenges in forecasting with machine learning
As machine learning techniques continue to advance, new challenges and opportunities arise in the field of forecasting. Some of the future developments and challenges include:
- Deep learning: Deep learning models, such as neural networks and recurrent neural networks, have shown promising results in forecasting tasks. However, they require large amounts of data and computational resources, which may be difficult to obtain in some applications.
- Transfer learning: Transfer learning involves using pre-trained models and fine-tuning them for specific forecasting tasks. This can help reduce the amount of data and computational resources required and improve the model's performance.
- Explainable AI: As machine learning models become more complex, it becomes increasingly important to understand how they make predictions. Explainable AI techniques, such as local interpretable model-agnostic explanations (LIME) and SHAP values, can help improve the transparency and interpretability of machine learning models.
- Ethical considerations: As machine learning models become more powerful, it is crucial to consider the ethical implications of their use. This includes issues such as bias and fairness, privacy, and the potential misuse of forecasting models.
## Exercise
Instructions:
1. Research the latest developments in machine learning for forecasting.
2. Discuss the challenges and opportunities in the field of forecasting with machine learning.
3. Consider the ethical implications of using machine learning for forecasting.
### Solution
This exercise is meant to encourage further research and discussion on the topic. There is no specific solution to provide. | Textbooks |
Cough at 1000 km/h?
How fast does air move in the airways during a cough?
The following passage is from Talley and O'Connor's Clinical examination: a systematic guide to physical diagnosis (emphasis mine):
Cough is a common presenting respiratory symptom. It occurs when deep inspiration is followed by explosive expiration. Flow rates of air in the trachea approach the speed of sound during a forceful cough. Coughing enables the airways to be cleared of secretions and foreign bodies.
The speed of sound claim is unreferenced. I have found mention of coughs approaching the speed of sound in numerous popular sources (e.g. here, here, here (where it says 1000 km/h), and here), and in innumerable books (e.g. here, here, here, here, here, and here); none of these references the claim. The same claim is also here. This book has a more exact (unreferenced!) claim:
Velocities as great as 28 000 cm/s (85% of the speed of sound) have been reported, but it is impossible to determine the gas velocity at points of airway constriction, where the greatest shearing forces will be developed. During this phase there is dynamic collapse in the bronchial tree, with large pressure gradients across the collapsed segment.
That speed is just over 1000 km/h. When I have searched for the research literature behind this, I've only found much lower velocities at the mouth (rather than at a narrower location like the glottis), e.g. a peak cough velocity of 22 m/s, also 11.2 m/s, and 28.8 m/s. The closest to a reference I found was this book with the following:
A cough comprises: ... sudden opening of the glottis, causing air to explode outwards at up to 500 mph or 85% of the speed of sound (Irwin et al, 1998), shearing secretions off the airway walls.
Irwin et al. isn't primary literature either; it references Comroe JH, Jr. Special acts involving breathing. In: Physiology of respiration: an introductory text. 2nd ed. Chicago: Year Book Medical Publishers, 1974; 230-31. I don't have access to this book (does anyone here?), but I expect the references only continue from there.
My question is this: how fast is a cough in the airways (I am interested because such an explosive rush of air could explain the substantial damage seen in chronic cough), and does anyone know where the 1000 km/h claim comes from, or can point me to a legitimate reference?
human-biology respiration breathing lungs measurement
edited Nov 9 '16 at 5:02
Anon
If you are asking about whether a claim is correct or not and where it comes from, you might receive more feedback on the Skeptics Stack Exchange. – Ebbinghaus Nov 9 '16 at 6:25
Perhaps. I'm not a skeptic though, I'm sure the claim is probably true; I would just be interested to find a reference for it so I can see how it was measured. I thought it would be more relevant on biology.SE. – Anon Nov 9 '16 at 6:29
This reference from CHEST lists 21 clinically measured peak flow rates during various modes of coughing. Of these patients, and for unassisted cough, the highest peak flow is about 4 liters/sec. The human trachea ranges from 13 to 27 mm diameter. The relationship between velocity, $V$ and flow $Q$ is
$$ V=\frac{Q}{A}$$
Assume the 4 liters/sec = 4000 cm^3/sec and minimum diameter, 13 mm = 1.3 cm, the cross section area being
$$A = \pi (D/2)^2 = 1.3 cm^2$$
Plugging in
$$ V=\frac{4000}{1.3} = 3077 cm/sec$$
which is a far cry from 28,000 cm/sec, so at this point I'm skeptical.
One thing to consider is that the data in the paper were taken from sick patients, so perhaps a healthy (and athletic) person may be able to exert much higher flow rates. But then healthy people generally are not stimulated to cough as much as a sick person with an airway compromised by sputum.
Although the lower airways do have smaller diameters, the flow measured at the trachea is divided among them, so you wouldn't expect to see peak velocities in the lower airways; rather, the flow accumulates in the trachea.
docscience
Nice answer +1. I did a similar calculation to you, but I thought perhaps using that size for the trachea wasn't legitimate; a cough begins with a closed glottis, so in principle the airflow in the first moments is through a very narrow opening between the vocal folds and so in principle the flow rate could be higher. The flows in your paper are at the mouth which would potentially lead to a slower pickup in velocity at the beginning (if the actual speed were actually near the speed of sound which is the maximum speed that an impulse can be transmitted in air). I agree about smaller airways. – Anon Mar 21 '17 at 5:43
[Submitted on 16 Oct 2019 (v1), last revised 28 Apr 2022 (this version, v8)]
Title:Hilbert's third problem and a conjecture of Goncharov
Authors:Jonathan Campbell, Inna Zakharevich
Abstract: In this paper we reduce the generalized Hilbert's third problem about Dehn invariants and scissors congruence classes to the injectivity of certain Chern--Simons invariants. We also establish a version of a conjecture of Goncharov relating scissors congruence groups of polytopes and the algebraic $K$-theory of $\mathbf{C}$.
Comments: Current version is slightly different from previous: the maps relating K-theory and the Goncharov complex go in the other direction
Subjects: K-Theory and Homology (math.KT); Algebraic Geometry (math.AG); Algebraic Topology (math.AT)
MSC classes: 52B45, 19E99, 19D55, 55U10, 18G40, 19F27
Cite as: arXiv:1910.07112 [math.KT]
(or arXiv:1910.07112v8 [math.KT] for this version)
From: Inna Zakharevich
[v1] Wed, 16 Oct 2019 00:55:54 UTC (53 KB)
[v2] Sat, 8 Aug 2020 19:03:41 UTC (61 KB)
[v3] Wed, 26 Aug 2020 19:21:14 UTC (61 KB)
[v5] Thu, 7 Jan 2021 21:03:15 UTC (0 KB)
[v6] Fri, 19 Mar 2021 19:15:14 UTC (73 KB)
[v7] Mon, 4 Apr 2022 21:38:32 UTC (73 KB)
[v8] Thu, 28 Apr 2022 17:48:52 UTC (73 KB)
Cross-validation (statistics)
Cross-validation,[2][3][4] sometimes called rotation estimation[5][6][7] or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. Cross-validation is a resampling method that uses different portions of the data to test and train a model on different iterations. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run (training dataset), and a dataset of unknown data (or first seen data) against which the model is tested (called the validation dataset or testing set).[8][9] The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias[10] and to give an insight on how the model will generalize to an independent dataset (i.e., an unknown dataset, for instance from a real problem).
One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, in most methods multiple rounds of cross-validation are performed using different partitions, and the validation results are combined (e.g. averaged) over the rounds to give an estimate of the model's predictive performance.
In summary, cross-validation combines (averages) measures of fitness in prediction to derive a more accurate estimate of model prediction performance.[11]
Motivation
Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large especially when the size of the training data set is small, or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect.
Example: linear regression
In linear regression, there exist real response values $ y_{1},\ldots ,y_{n}$, and n p-dimensional vector covariates x1, ..., xn. The components of the vector xi are denoted xi1, ..., xip. If least squares is used to fit a function in the form of a hyperplane ${\hat {y}}=a+{\boldsymbol {\beta }}^{T}\mathbf {x} $ to the data (xi, yi) 1 ≤ i ≤ n, then the fit can be assessed using the mean squared error (MSE). The MSE for given estimated parameter values a and β on the training set (xi, yi) 1 ≤ i ≤ n is defined as:
${\begin{aligned}{\text{MSE}}&={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-{\hat {y}}_{i})^{2}={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-a-{\boldsymbol {\beta }}^{T}\mathbf {x} _{i})^{2}\\&={\frac {1}{n}}\sum _{i=1}^{n}(y_{i}-a-\beta _{1}x_{i1}-\dots -\beta _{p}x_{ip})^{2}\end{aligned}}$
If the model is correctly specified, it can be shown under mild assumptions that the expected value of the MSE for the training set is (n − p − 1)/(n + p + 1) < 1 times the expected value of the MSE for the validation set[12] (the expected value is taken over the distribution of training sets). Thus, a fitted model and computed MSE on the training set will result in an optimistically biased assessment of how well the model will fit an independent data set. This biased estimate is called the in-sample estimate of the fit, whereas the cross-validation estimate is an out-of-sample estimate.
Since in linear regression it is possible to directly compute the factor (n − p − 1)/(n + p + 1) by which the training MSE underestimates the validation MSE under the assumption that the model specification is valid, cross-validation can be used for checking whether the model has been overfitted, in which case the MSE in the validation set will substantially exceed its anticipated value. (Cross-validation in the context of linear regression is also useful in that it can be used to select an optimally regularized cost function.)
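As a quick numeric illustration of this factor (the values of n and p below are arbitrary and chosen only for the example):

```cpp
#include <iostream>

int main() {
    // Expected ratio of training-set MSE to validation-set MSE for a
    // correctly specified linear model: (n - p - 1) / (n + p + 1).
    const double n = 100.0;  // number of observations (illustrative)
    const double p = 10.0;   // number of covariates (illustrative)
    std::cout << "E[training MSE] / E[validation MSE] = "
              << (n - p - 1.0) / (n + p + 1.0) << std::endl;
    // Prints ~0.80: the in-sample MSE understates the out-of-sample MSE
    // by roughly 20% in this configuration.
    return 0;
}
```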
General case
In most other regression procedures (e.g. logistic regression), there is no simple formula to compute the expected out-of-sample fit. Cross-validation is, thus, a generally applicable way to predict the performance of a model on unavailable data using numerical computation in place of theoretical analysis.
Types
Two types of cross-validation can be distinguished: exhaustive and non-exhaustive cross-validation.
Exhaustive cross-validation
Exhaustive cross-validation methods are cross-validation methods which learn and test on all possible ways to divide the original sample into a training and a validation set.
Leave-p-out cross-validation
Leave-p-out cross-validation (LpO CV) involves using p observations as the validation set and the remaining observations as the training set. This is repeated on all ways to cut the original sample on a validation set of p observations and a training set.[13]
LpO cross-validation requires training and validating the model $C_{p}^{n}$ times, where n is the number of observations in the original sample, and where $C_{p}^{n}$ is the binomial coefficient. For p > 1 and for even moderately large n, LpO CV can become computationally infeasible. For example, with n = 100 and p = 30, $C_{30}^{100}\approx 3\times 10^{25}.$
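The combinatorial blow-up can be checked numerically. The sketch below (illustrative only) evaluates the binomial coefficient in log-space with lgamma to avoid overflow:

```cpp
#include <cmath>
#include <iostream>

// Number of distinct train/validation splits in leave-p-out CV is C(n, p),
// computed here in log-space to avoid integer overflow.
double binomialCoefficient(double n, double p) {
    return std::exp(std::lgamma(n + 1.0) - std::lgamma(p + 1.0) - std::lgamma(n - p + 1.0));
}

int main() {
    std::cout << "C(100, 30) is roughly " << binomialCoefficient(100, 30) << std::endl;
    // Prints about 2.9e+25 -- that many model fits is computationally infeasible.
    return 0;
}
```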
A variant of LpO cross-validation with p=2 known as leave-pair-out cross-validation has been recommended as a nearly unbiased method for estimating the area under ROC curve of binary classifiers.[14]
Leave-one-out cross-validation
Leave-one-out cross-validation (LOOCV) is a particular case of leave-p-out cross-validation with p = 1. The process looks similar to jackknife; however, with cross-validation one computes a statistic on the left-out sample(s), while with jackknifing one computes a statistic from the kept samples only.
LOO cross-validation requires less computation time than LpO cross-validation because there are only $C_{1}^{n}=n$ passes rather than $C_{p}^{n}$. However, $n$ passes may still require quite a large computation time, in which case other approaches such as k-fold cross validation may be more appropriate.[15]
Pseudo-code algorithm:
Input:
x, {vector of length N with x-values of incoming points}
y, {vector of length N with y-values of the expected result}
interpolate( x_in, y_in, x_out ), { returns the estimation for point x_out after the model is trained with x_in-y_in pairs}
Output:
err, {estimate for the prediction error}
Steps:
err ← 0
for i ← 1, ..., N do
// define the cross-validation subsets
x_in ← (x[1], ..., x[i − 1], x[i + 1], ..., x[N])
y_in ← (y[1], ..., y[i − 1], y[i + 1], ..., y[N])
x_out ← x[i]
y_out ← interpolate(x_in, y_in, x_out)
err ← err + (y[i] − y_out)^2
end for
err ← err/N
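The pseudo-code above translates directly into a runnable program. The following C++ rendering is one possible sketch; the mean-of-neighbors `interpolate` is a deliberately trivial stand-in for whatever model is actually being validated:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Stand-in model: predict y at x_out as the mean of the training y-values.
// Any real model fitted on (x_in, y_in) could be substituted here.
double interpolate(const std::vector<double>& x_in,
                   const std::vector<double>& y_in,
                   double /*x_out*/) {
    double sum = 0.0;
    for (double y : y_in) sum += y;
    return sum / static_cast<double>(y_in.size());
}

// Leave-one-out cross-validation estimate of the squared prediction error.
double leaveOneOutError(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t N = x.size();
    double err = 0.0;
    for (std::size_t i = 0; i < N; ++i) {
        std::vector<double> x_in, y_in;
        for (std::size_t j = 0; j < N; ++j) {
            if (j != i) {  // hold out observation i
                x_in.push_back(x[j]);
                y_in.push_back(y[j]);
            }
        }
        const double y_out = interpolate(x_in, y_in, x[i]);
        err += (y[i] - y_out) * (y[i] - y_out);
    }
    return err / static_cast<double>(N);
}

int main() {
    const std::vector<double> x{1, 2, 3, 4, 5};
    const std::vector<double> y{1.1, 1.9, 3.2, 3.9, 5.1};
    std::cout << "LOOCV error estimate: " << leaveOneOutError(x, y) << std::endl;
    return 0;
}
```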
Non-exhaustive cross-validation
Non-exhaustive cross validation methods do not compute all ways of splitting the original sample. These methods are approximations of leave-p-out cross-validation.
k-fold cross-validation
In k-fold cross-validation, the original sample is randomly partitioned into k equal sized subsamples. Of the k subsamples, a single subsample is retained as the validation data for testing the model, and the remaining k − 1 subsamples are used as training data. The cross-validation process is then repeated k times, with each of the k subsamples used exactly once as the validation data. The k results can then be averaged to produce a single estimation. The advantage of this method over repeated random sub-sampling (see below) is that all observations are used for both training and validation, and each observation is used for validation exactly once. 10-fold cross-validation is commonly used,[16] but in general k remains an unfixed parameter.
For example, setting k = 2 results in 2-fold cross-validation. In 2-fold cross-validation, we randomly shuffle the dataset into two sets d0 and d1, so that both sets are equal size (this is usually implemented by shuffling the data array and then splitting it in two). We then train on d0 and validate on d1, followed by training on d1 and validating on d0.
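One way to implement the partitioning step is to shuffle the observation indices once and deal them into k folds, as in the following sketch (the dealing scheme and the fixed seed are arbitrary illustrative choices):

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Shuffle the indices 0..n-1 once and deal them into k nearly equal folds.
// Each fold serves exactly once as the validation set; the remaining k-1
// folds form the corresponding training set.
std::vector<std::vector<int>> kFoldIndices(int n, int k, unsigned seed = 42) {
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::mt19937 rng(seed);
    std::shuffle(idx.begin(), idx.end(), rng);

    std::vector<std::vector<int>> folds(k);
    for (int i = 0; i < n; ++i) {
        folds[i % k].push_back(idx[i]);  // round-robin assignment to folds
    }
    return folds;
}

int main() {
    const auto folds = kFoldIndices(10, 5);
    for (std::size_t f = 0; f < folds.size(); ++f) {
        std::cout << "fold " << f << ":";
        for (int i : folds[f]) std::cout << ' ' << i;
        std::cout << '\n';
    }
    return 0;
}
```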
When k = n (the number of observations), k-fold cross-validation is equivalent to leave-one-out cross-validation.[17]
In stratified k-fold cross-validation, the partitions are selected so that the mean response value is approximately equal in all the partitions. In the case of binary classification, this means that each partition contains roughly the same proportions of the two types of class labels.
In repeated cross-validation the data is randomly split into k partitions several times. The performance of the model can thereby be averaged over several runs, but this is rarely desirable in practice.[18]
When many different statistical or machine learning models are being considered, greedy k-fold cross-validation can be used to quickly identify the most promising candidate models.[19]
Holdout method
In the holdout method, we randomly assign data points to two sets d0 and d1, usually called the training set and the test set, respectively. The size of each of the sets is arbitrary although typically the test set is smaller than the training set. We then train (build a model) on d0 and test (evaluate its performance) on d1.
In typical cross-validation, results of multiple runs of model-testing are averaged together; in contrast, the holdout method, in isolation, involves a single run. It should be used with caution because without such averaging of multiple runs, one may achieve highly misleading results. One's indicator of predictive accuracy (F*) will tend to be unstable since it will not be smoothed out by multiple iterations (see below). Similarly, indicators of the specific role played by various predictor variables (e.g., values of regression coefficients) will tend to be unstable.
While the holdout method can be framed as "the simplest kind of cross-validation",[20] many sources instead classify holdout as a type of simple validation, rather than a simple or degenerate form of cross-validation.[6][21]
Repeated random sub-sampling validation
This method, also known as Monte Carlo cross-validation,[22] creates multiple random splits of the dataset into training and validation data.[23] For each such split, the model is fit to the training data, and predictive accuracy is assessed using the validation data. The results are then averaged over the splits. The advantage of this method (over k-fold cross validation) is that the proportion of the training/validation split is not dependent on the number of iterations (i.e., the number of partitions). The disadvantage of this method is that some observations may never be selected in the validation subsample, whereas others may be selected more than once. In other words, validation subsets may overlap. This method also exhibits Monte Carlo variation, meaning that the results will vary if the analysis is repeated with different random splits.
As the number of random splits approaches infinity, the result of repeated random sub-sampling validation tends towards that of leave-p-out cross-validation.
In a stratified variant of this approach, the random samples are generated in such a way that the mean response value (i.e. the dependent variable in the regression) is equal in the training and testing sets. This is particularly useful if the responses are dichotomous with an unbalanced representation of the two response values in the data.
A method that applies repeated random sub-sampling is RANSAC.[24]
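The splitting step of repeated random sub-sampling can be sketched as follows; the model fitting and error evaluation are left as placeholder comments, and all sizes are illustrative:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// One random train/validation split: shuffle the indices 0..n-1 and take the
// first n_train as the training set and the remainder as the validation set.
std::pair<std::vector<int>, std::vector<int>>
randomSplit(int n, int n_train, std::mt19937& rng) {
    std::vector<int> idx(n);
    std::iota(idx.begin(), idx.end(), 0);
    std::shuffle(idx.begin(), idx.end(), rng);
    return {std::vector<int>(idx.begin(), idx.begin() + n_train),
            std::vector<int>(idx.begin() + n_train, idx.end())};
}

int main() {
    std::mt19937 rng(7);  // fixed seed for reproducibility (arbitrary)
    const int B = 3;      // number of random splits (typically much larger)
    for (int b = 0; b < B; ++b) {
        auto [train, valid] = randomSplit(10, 7, rng);
        // fit the model on `train`, evaluate on `valid`, and accumulate the error
        std::cout << "split " << b << ": " << train.size() << " training / "
                  << valid.size() << " validation points\n";
    }
    return 0;
}
```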
Nested cross-validation
When cross-validation is used simultaneously for selection of the best set of hyperparameters and for error estimation (and assessment of generalization capacity), a nested cross-validation is required. Many variants exist. At least two variants can be distinguished:
k*l-fold cross-validation
This is a truly nested variant which contains an outer loop of k sets and an inner loop of l sets. The total data set is split into k sets. One by one, a set is selected as the (outer) test set and the k - 1 other sets are combined into the corresponding outer training set. This is repeated for each of the k sets. Each outer training set is further sub-divided into l sets. One by one, a set is selected as inner test (validation) set and the l - 1 other sets are combined into the corresponding inner training set. This is repeated for each of the l sets. The inner training sets are used to fit model parameters, while the outer test set is used as a validation set to provide an unbiased evaluation of the model fit. Typically, this is repeated for many different hyperparameters (or even different model types) and the validation set is used to determine the best hyperparameter set (and model type) for this inner training set. After this, a new model is fit on the entire outer training set, using the best set of hyperparameters from the inner cross-validation. The performance of this model is then evaluated using the outer test set.
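The loop structure of k*l-fold nested cross-validation can be outlined as below. This is a structural sketch only: the data partitioning, the model, and the error measurements are placeholders, and the candidate hyperparameter values are arbitrary.

```cpp
#include <iostream>
#include <vector>

// Skeleton of k*l-fold nested cross-validation: the outer loop estimates the
// generalization error, the inner loop selects hyperparameters. Data handling
// and the model itself are placeholders; only the loop structure is shown.
int main() {
    const int k = 5;  // outer folds (error estimation)
    const int l = 4;  // inner folds (hyperparameter selection)
    const std::vector<double> candidates{0.01, 0.1, 1.0};  // illustrative values

    for (int outer = 0; outer < k; ++outer) {
        // fold `outer` is the outer test set; the other k-1 folds form the
        // outer training set, which is further split into l inner sets
        double bestScore = 1e300;
        double bestParam = candidates.front();
        for (double param : candidates) {
            double innerError = 0.0;
            for (int inner = 0; inner < l; ++inner) {
                // fit model(param) on the inner training set and
                // evaluate it on inner fold `inner`; accumulate the error
                innerError += 0.0;  // placeholder for the measured error
            }
            if (innerError / l < bestScore) {
                bestScore = innerError / l;
                bestParam = param;
            }
        }
        // refit on the full outer training set with bestParam, then
        // evaluate that model once on the outer test set
        std::cout << "outer fold " << outer << ": selected hyperparameter "
                  << bestParam << '\n';
    }
    return 0;
}
```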
k-fold cross-validation with validation and test set
This is a type of k*l-fold cross-validation when l = k - 1. A single k-fold cross-validation is used with both a validation and test set. The total data set is split into k sets. One by one, a set is selected as test set. Then, one by one, one of the remaining sets is used as a validation set and the other k - 2 sets are used as training sets until all possible combinations have been evaluated. Similar to k*l-fold cross-validation, the training set is used for model fitting and the validation set is used for model evaluation for each of the hyperparameter sets. Finally, the test set is used to evaluate the model with the best parameter set. Here, two variants are possible: either evaluating the model that was trained on the training set or evaluating a new model that was fit on the combination of the training and the validation set.
Measures of fit
The goal of cross-validation is to estimate the expected level of fit of a model to a data set that is independent of the data that were used to train the model. It can be used to estimate any quantitative measure of fit that is appropriate for the data and model. For example, for binary classification problems, each case in the validation set is either predicted correctly or incorrectly. In this situation the misclassification error rate can be used to summarize the fit, although other measures like positive predictive value could also be used. When the value being predicted is continuously distributed, the mean squared error, root mean squared error or median absolute deviation could be used to summarize the errors.
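These summary measures are one-liners once held-out predictions are available; a NumPy sketch, where the function and variable names are ours:

```python
import numpy as np

def misclassification_rate(y_true, y_pred):
    # Fraction of validation cases predicted incorrectly (binary or multiclass).
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

def rmse(y_true, y_pred):
    # Root mean squared error for continuously distributed responses.
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))

def median_absolute_deviation(y_true, y_pred):
    # Median absolute deviation of the prediction errors.
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.median(np.abs(err)))
```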
Using prior information
When users apply cross-validation to select a good configuration $\lambda $, they might want to balance the cross-validated choice with their own estimate of the configuration. In this way, they can attempt to counter the volatility of cross-validation when the sample size is small and include relevant information from previous research. In a forecasting combination exercise, for instance, cross-validation can be applied to estimate the weights that are assigned to each forecast. Since a simple equal-weighted forecast is difficult to beat, a penalty can be added for deviating from equal weights.[25] Or, if cross-validation is applied to assign individual weights to observations, then one can penalize deviations from equal weights to avoid wasting potentially relevant information.[25] Hoornweg (2018) shows how a tuning parameter $\gamma $ can be defined so that a user can intuitively balance between the accuracy of cross-validation and the simplicity of sticking to a reference parameter $\lambda _{R}$ that is defined by the user.
If $\lambda _{i}$ denotes the $i^{th}$ candidate configuration that might be selected, then the loss function that is to be minimized can be defined as
$L_{\lambda _{i}}=(1-\gamma ){\mbox{ Relative Accuracy}}_{i}+\gamma {\mbox{ Relative Simplicity}}_{i}.$
Relative accuracy can be quantified as ${\mbox{MSE}}(\lambda _{i})/{\mbox{MSE}}(\lambda _{R})$, so that the mean squared error of a candidate $\lambda _{i}$ is made relative to that of a user-specified $\lambda _{R}$. The relative simplicity term measures the amount that $\lambda _{i}$ deviates from $\lambda _{R}$ relative to the maximum amount of deviation from $\lambda _{R}$. Accordingly, relative simplicity can be specified as ${\frac {(\lambda _{i}-\lambda _{R})^{2}}{(\lambda _{\max }-\lambda _{R})^{2}}}$, where $\lambda _{\max }$ corresponds to the $\lambda $ value with the highest permissible deviation from $\lambda _{R}$. With $\gamma \in [0,1]$, the user determines how strong the influence of the reference parameter is relative to cross-validation.
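A direct transcription of these two quantities into code; the candidate values, their cross-validated MSEs, and the reference parameter are all supplied by the user, and the names are illustrative.

```python
import numpy as np

def hoornweg_loss(lambdas, mse, lam_ref, mse_ref, lam_max, gamma):
    """Accuracy-simplicity loss for candidate configurations.

    relative accuracy  : MSE(lambda_i) / MSE(lambda_R)
    relative simplicity: (lambda_i - lambda_R)^2 / (lambda_max - lambda_R)^2
    gamma in [0, 1] sets how strongly the reference lambda_R is favoured.
    """
    lambdas = np.asarray(lambdas, dtype=float)
    rel_acc = np.asarray(mse, dtype=float) / mse_ref
    rel_simp = (lambdas - lam_ref) ** 2 / (lam_max - lam_ref) ** 2
    return (1.0 - gamma) * rel_acc + gamma * rel_simp

# The selected configuration is the one minimizing the loss, e.g.:
# best = lambdas[np.argmin(hoornweg_loss(lambdas, mse, 0.0, mse_r, 10.0, 0.3))]
```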
One can add relative simplicity terms for multiple configurations $c=1,2,...,C$ by specifying the loss function as
$L_{\lambda _{i}}={\mbox{ Relative Accuracy}}_{i}+\sum _{c=1}^{C}{\frac {\gamma _{c}}{1-\gamma _{c}}}{\mbox{ Relative Simplicity}}_{i,c}.$
Hoornweg (2018) shows that a loss function with such an accuracy-simplicity tradeoff can also be used to intuitively define shrinkage estimators like the (adaptive) lasso and Bayesian / ridge regression.[25]
Statistical properties
Suppose we choose a measure of fit F, and use cross-validation to produce an estimate F* of the expected fit EF of a model to an independent data set drawn from the same population as the training data. If we imagine sampling multiple independent training sets following the same distribution, the resulting values for F* will vary. The statistical properties of F* result from this variation.
The cross-validation estimator F* is very nearly unbiased for EF.[26] The reason that it is slightly biased is that the training set in cross-validation is slightly smaller than the actual data set (e.g. for LOOCV the training set size is n − 1 when there are n observed cases). In nearly all situations, the effect of this bias will be conservative in that the estimated fit will be slightly biased in the direction suggesting a poorer fit. In practice, this bias is rarely a concern.
The variance of F* can be large.[27][28] For this reason, if two statistical procedures are compared based on the results of cross-validation, the procedure with the better estimated performance may not actually be the better of the two procedures (i.e. it may not have the better value of EF). Some progress has been made on constructing confidence intervals around cross-validation estimates,[27] but this is considered a difficult problem.
Computational issues
Most forms of cross-validation are straightforward to implement as long as an implementation of the prediction method being studied is available. In particular, the prediction method can be a "black box" – there is no need to have access to the internals of its implementation. If the prediction method is expensive to train, cross-validation can be very slow since the training must be carried out repeatedly. In some cases such as least squares and kernel regression, cross-validation can be sped up significantly by pre-computing certain values that are needed repeatedly in the training, or by using fast "updating rules" such as the Sherman–Morrison formula. However one must be careful to preserve the "total blinding" of the validation set from the training procedure, otherwise bias may result. An extreme example of accelerating cross-validation occurs in linear regression, where the results of cross-validation have a closed-form expression known as the prediction residual error sum of squares (PRESS).
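For ordinary least squares, the leave-one-out residuals have the closed form e_i/(1 − h_ii), with h_ii the diagonal of the hat matrix, so PRESS requires only a single fit; a minimal sketch assuming a full-rank NumPy design matrix X (with an intercept column already included if desired):

```python
import numpy as np

def press(X, y):
    """Prediction residual error sum of squares for OLS.

    Uses the identity e_loo,i = e_i / (1 - h_ii), where h_ii are the
    diagonal entries of the hat matrix H = X (X'X)^{-1} X', so the
    model never needs to be refit n times.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    # Row-wise dot products give the diagonal of the hat matrix.
    h = np.einsum('ij,ij->i', X @ np.linalg.inv(X.T @ X), X)
    return float(np.sum((resid / (1.0 - h)) ** 2))
```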
Limitations and misuse
Cross-validation only yields meaningful results if the validation set and training set are drawn from the same population and only if human biases are controlled.
In many applications of predictive modeling, the structure of the system being studied evolves over time (i.e. it is "non-stationary"). Non-stationarity, like a mismatch between the populations from which the training and validation data are drawn, can introduce systematic differences between the two sets. For example, if a model for predicting stock values is trained on data for a certain five-year period, it is unrealistic to treat the subsequent five-year period as a draw from the same population. As another example, suppose a model is developed to predict an individual's risk for being diagnosed with a particular disease within the next year. If the model is trained using data from a study involving only a specific population group (e.g. young people or males), but is then applied to the general population, the cross-validation results from the training set could differ greatly from the actual predictive performance.
In many applications, models may also be incorrectly specified and vary as a function of modeler biases and/or arbitrary choices. When this occurs, there may be an illusion that the system changes in external samples, whereas the real reason is that the model has missed a critical predictor and/or included a confounded predictor. New evidence indicates that cross-validation by itself is not very predictive of external validity, whereas a form of experimental validation known as swap sampling, which does control for human bias, can be much more predictive of external validity.[29] As defined by this large MAQC-II study across 30,000 models, swap sampling incorporates cross-validation in the sense that predictions are tested across independent training and validation samples. Yet, models are also developed across these independent samples and by modelers who are blinded to one another. When there is a mismatch in the models developed across these swapped training and validation samples, as happens quite frequently, MAQC-II shows that this is much more predictive of poor external validity than traditional cross-validation.
The reason for the success of swapped sampling is its built-in control for human biases in model building. Beyond placing too much faith in predictions that may vary across modelers and generalize poorly because of these confounding modeler effects, cross-validation can be misused in other ways:
• By performing an initial analysis to identify the most informative features using the entire data set – if feature selection or model tuning is required by the modeling procedure, this must be repeated on every training set (see the pipeline sketch after this list). Otherwise, predictions will certainly be upwardly biased.[30] If cross-validation is used to decide which features to use, an inner cross-validation to carry out the feature selection on every training set must be performed.[31]
• Performing mean-centering, rescaling, dimensionality reduction, outlier removal or any other data-dependent preprocessing using the entire data set. While very common in practice, this has been shown to introduce biases into the cross-validation estimates.[32]
• By allowing some of the training data to also be included in the test set – this can happen due to "twinning" in the data set, whereby some exactly identical or nearly identical samples are present in the data set. To some extent twinning always takes place even in perfectly independent training and validation samples. This is because some of the training sample observations will have nearly identical values of predictors as validation sample observations. And some of these will correlate with a target at better than chance levels in the same direction in both training and validation when they are actually driven by confounded predictors with poor external validity. If such a cross-validated model is selected from a k-fold set, human confirmation bias will be at work and determine that such a model has been validated. This is why traditional cross-validation needs to be supplemented with controls for human bias and confounded model specification like swap sampling and prospective studies.
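One way to keep such data-dependent steps inside every training fold is to bundle preprocessing, feature selection and the classifier into a single estimator that is refit from scratch on each fold; a sketch assuming scikit-learn, where the particular scaler, selector, classifier and k are illustrative choices:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Scaling and feature selection are refit inside every training fold,
# so the validation fold never leaks into preprocessing decisions.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=10)),
    ("clf", LogisticRegression(max_iter=1000)),
])
# scores = cross_val_score(pipe, X, y, cv=5)  # X, y supplied by the user
```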
Cross-validation for time-series models
Since the order of the data is important, cross-validation might be problematic for time-series models. A more appropriate approach might be to use rolling cross-validation.[33]
However, if performance is described by a single summary statistic, it is possible that the approach described by Politis and Romano as a stationary bootstrap[34] will work. The statistic of the bootstrap needs to accept an interval of the time series and return the summary statistic on it. The call to the stationary bootstrap needs to specify an appropriate mean interval length.
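A sketch of the rolling (forward-chaining) variant mentioned above, in which each validation block strictly follows its training window so that no future observations leak into training; the estimator is assumed to expose a fit/score interface, and the window sizes are illustrative:

```python
import numpy as np

def rolling_cv_scores(model, X, y, initial=100, horizon=20, step=20):
    """Rolling-origin evaluation for time-ordered data.

    Train on observations [0, t), validate on [t, t + horizon), then
    advance the origin t by `step`; temporal order is preserved
    throughout.
    """
    scores, t, n = [], initial, len(y)
    while t + horizon <= n:
        model.fit(X[:t], y[:t])
        scores.append(model.score(X[t:t + horizon], y[t:t + horizon]))
        t += step
    return np.asarray(scores)
```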
Applications
Cross-validation can be used to compare the performances of different predictive modeling procedures. For example, suppose we are interested in optical character recognition, and we are considering using either a Support Vector Machine (SVM) or k-nearest neighbors (KNN) to predict the true character from an image of a handwritten character. Using cross-validation, we could objectively compare these two methods in terms of their respective fractions of misclassified characters. If we simply compared the methods based on their in-sample error rates, one method would likely appear to perform better, since it is more flexible and hence more prone to overfitting compared to the other method.
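Such a comparison can be run in a few lines with scikit-learn; the 10 folds and the default hyperparameters are illustrative, and in practice each method would be tuned inside the folds as discussed earlier:

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, cv=10):
    # Both methods are scored on the same folds, so the comparison
    # reflects out-of-sample accuracy rather than in-sample fit.
    for name, clf in [("SVM", SVC()), ("KNN", KNeighborsClassifier())]:
        scores = cross_val_score(clf, X, y, cv=cv)
        print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```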
Cross-validation can also be used in variable selection.[35] Suppose we are using the expression levels of 20 proteins to predict whether a cancer patient will respond to a drug. A practical goal would be to determine which subset of the 20 features should be used to produce the best predictive model. For most modeling procedures, if we compare feature subsets using the in-sample error rates, the best performance will occur when all 20 features are used. However under cross-validation, the model with the best fit will generally include only a subset of the features that are deemed truly informative.
A recent development in medical statistics is its use in meta-analysis. It forms the basis of the validation statistic, Vn which is used to test the statistical validity of meta-analysis summary estimates.[36] It has also been used in a more conventional sense in meta-analysis to estimate the likely prediction error of meta-analysis results.[37]
See also
• Boosting (machine learning)
• Bootstrap aggregating (bagging)
• Out-of-bag error
• Bootstrapping (statistics)
• Leakage (machine learning)
• Model selection
• Stability (learning theory)
• Validity (statistics)
Notes and references
1. Piryonesi S. Madeh; El-Diraby Tamer E. (2020-03-01). "Data Analytics in Asset Management: Cost-Effective Prediction of the Pavement Condition Index". Journal of Infrastructure Systems. 26 (1): 04019036. doi:10.1061/(ASCE)IS.1943-555X.0000512. S2CID 213782055.
2. Allen, David M (1974). "The Relationship between Variable Selection and Data Agumentation and a Method for Prediction". Technometrics. 16 (1): 125–127. doi:10.2307/1267500. JSTOR 1267500.
3. Stone, M (1974). "Cross-Validatory Choice and Assessment of Statistical Predictions". Journal of the Royal Statistical Society, Series B (Methodological). 36 (2): 111–147. doi:10.1111/j.2517-6161.1974.tb00994.x. S2CID 62698647.
4. Stone, M (1977). "An Asymptotic Equivalence of Choice of Model by Cross-Validation and Akaike's Criterion". Journal of the Royal Statistical Society, Series B (Methodological). 39 (1): 44–47. doi:10.1111/j.2517-6161.1977.tb01603.x. JSTOR 2984877.
5. Geisser, Seymour (1993). Predictive Inference. New York, NY: Chapman and Hall. ISBN 978-0-412-03471-8.
6. Kohavi, Ron (1995). "A study of cross-validation and bootstrap for accuracy estimation and model selection". Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. 2 (12): 1137–1143. CiteSeerX 10.1.1.48.529.
7. Devijver, Pierre A.; Kittler, Josef (1982). Pattern Recognition: A Statistical Approach. London, GB: Prentice-Hall. ISBN 0-13-654236-0.
8. Galkin, Alexander (November 28, 2011). "What is the difference between test set and validation set?". Retrieved 10 October 2018.
9. "Newbie question: Confused about train, validation and test data!". Archived from the original on 2015-03-14. Retrieved 2013-11-14.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
10. Cawley, Gavin C.; Talbot, Nicola L. C. (2010). "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation" (PDF). Journal of Machine Learning Research. 11: 2079–2107.
11. Grossman, Robert; Seni, Giovanni; Elder, John; Agarwal, Nitin; Liu, Huan (2010). "Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions". Synthesis Lectures on Data Mining and Knowledge Discovery. Morgan & Claypool. 2: 1–126. doi:10.2200/S00240ED1V01Y200912DMK002.
12. Trippa, Lorenzo; Waldron, Levi; Huttenhower, Curtis; Parmigiani, Giovanni (March 2015). "Bayesian nonparametric cross-study validation of prediction methods". The Annals of Applied Statistics. 9 (1): 402–428. arXiv:1506.00474. Bibcode:2015arXiv150600474T. doi:10.1214/14-AOAS798. ISSN 1932-6157. S2CID 51943497.
13. Celisse, Alain (1 October 2014). "Optimal cross-validation in density estimation with the $L^{2}$-loss". The Annals of Statistics. 42 (5): 1879–1910. arXiv:0811.0802. doi:10.1214/14-AOS1240. ISSN 0090-5364. S2CID 17833620.
14. Airola, A.; Pahikkala, T.; Waegeman, W.; De Baets, Bernard; Salakoski, T. (2011-04-01). "An experimental comparison of cross-validation techniques for estimating the area under the ROC curve". Computational Statistics & Data Analysis. 55 (4): 1828–1844. doi:10.1016/j.csda.2010.11.018.
15. Molinaro, A. M.; Simon, R.; Pfeiffer, R. M. (2005-08-01). "Prediction error estimation: a comparison of resampling methods". Bioinformatics. 21 (15): 3301–3307. doi:10.1093/bioinformatics/bti499. ISSN 1367-4803. PMID 15905277.
16. McLachlan, Geoffrey J.; Do, Kim-Anh; Ambroise, Christophe (2004). Analyzing microarray gene expression data. Wiley.
17. "Elements of Statistical Learning: data mining, inference, and prediction. 2nd Edition". web.stanford.edu. Retrieved 2019-04-04.
18. Vanwinckelen, Gitte (2 October 2019). On Estimating Model Accuracy with Repeated Cross-Validation. pp. 39–44. ISBN 9789461970442.
19. Soper, Daniel S. (2021). "Greed Is Good: Rapid Hyperparameter Optimization and Model Selection Using Greedy k-Fold Cross Validation" (PDF). Electronics. 10 (16): 1973. doi:10.3390/electronics10161973.
20. "Cross Validation". Retrieved 11 November 2012.
21. Arlot, Sylvain; Celisse, Alain (2010). "A survey of cross-validation procedures for model selection". Statistics Surveys. 4: 40–79. arXiv:0907.4728. doi:10.1214/09-SS054. S2CID 14332192. In brief, CV consists in averaging several hold-out estimators of the risk corresponding to different data splits.
22. Dubitzky, Werner; Granzow, Martin; Berrar, Daniel (2007). Fundamentals of data mining in genomics and proteomics. Springer Science & Business Media. p. 178.
23. Kuhn, Max; Johnson, Kjell (2013). Applied Predictive Modeling. New York, NY: Springer New York. doi:10.1007/978-1-4614-6849-3. ISBN 9781461468486.
24. Cantzler, H. "Random sample consensus (ransac)." Institute for Perception, Action and Behaviour, Division of Informatics, University of Edinburgh (1981). http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.3035&rep=rep1&type=pdf
25. Hoornweg, Victor (2018). Science: Under Submission. Hoornweg Press. ISBN 978-90-829188-0-9.
26. Christensen, Ronald (May 21, 2015). "Thoughts on prediction and cross-validation" (PDF). Department of Mathematics and Statistics University of New Mexico. Retrieved May 31, 2017.
27. Efron, Bradley; Tibshirani, Robert (1997). "Improvements on cross-validation: The .632 + Bootstrap Method". Journal of the American Statistical Association. 92 (438): 548–560. doi:10.2307/2965703. JSTOR 2965703. MR 1467848.
28. Stone, Mervyn (1977). "Asymptotics for and against cross-validation". Biometrika. 64 (1): 29–35. doi:10.1093/biomet/64.1.29. JSTOR 2335766. MR 0474601.
29. Consortium, MAQC (2010). "The Microarray Quality Control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models". Nature Biotechnology. London: Nature Publishing Group. 28 (8): 827–838. doi:10.1038/nbt.1665. PMC 3315840. PMID 20676074.
30. Bermingham, Mairead L.; Pong-Wong, Ricardo; Spiliopoulou, Athina; Hayward, Caroline; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Agakov, Felix; Navarro, Pau; Haley, Chris S. (2015). "Application of high-dimensional feature selection: evaluation for genomic prediction in man". Sci. Rep. 5: 10312. Bibcode:2015NatSR...510312B. doi:10.1038/srep10312. PMC 4437376. PMID 25988841.
31. Varma, Sudhir; Simon, Richard (2006). "Bias in error estimation when using cross-validation for model selection". BMC Bioinformatics. 7: 91. doi:10.1186/1471-2105-7-91. PMC 1397873. PMID 16504092.
32. Moscovich, Amit; Rosset, Saharon (1 September 2022). "On the Cross-Validation Bias due to Unsupervised Preprocessing". Journal of the Royal Statistical Society Series B: Statistical Methodology. 84 (4): 1474–1502. arXiv:1901.08974. doi:10.1111/rssb.12537. S2CID 215745385.
33. Bergmeir, Christopher; Benitez, Jose (2012). "On the use of cross-validation for time series predictor evaluation". Information Sciences. 191: 192–213. doi:10.1016/j.ins.2011.12.028 – via Elsevier Science Direct.
34. Politis, Dimitris N.; Romano, Joseph P. (1994). "The Stationary Bootstrap". Journal of the American Statistical Association. 89 (428): 1303–1313. doi:10.1080/01621459.1994.10476870. hdl:10983/25607.
35. Picard, Richard; Cook, Dennis (1984). "Cross-Validation of Regression Models". Journal of the American Statistical Association. 79 (387): 575–583. doi:10.2307/2288403. JSTOR 2288403.
36. Willis BH, Riley RD (2017). "Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice". Statistics in Medicine. 36 (21): 3283–3301. doi:10.1002/sim.7372. PMC 5575530. PMID 28620945.
37. Riley RD, Ahmed I, Debray TP, Willis BH, Noordzij P, Higgins JP, Deeks JJ (2015). "Summarising and validating test accuracy results across multiple studies for use in clinical practice". Statistics in Medicine. 34 (13): 2081–2103. doi:10.1002/sim.6471. PMC 4973708. PMID 25800943.
A learning automata-based adaptive uniform fractional guard channel algorithm
Hamid Beigy & M. R. Meybodi
The Journal of Supercomputing, volume 71, pages 871–893 (2015)
In this paper, we propose an adaptive call admission algorithm based on learning automata. The proposed algorithm uses a learning automaton to decide on the acceptance/rejection of incoming new calls. It is shown that the given adaptive algorithm converges to an equilibrium point which is also optimal for the uniform fractional channel policy. To study the performance of the proposed call admission policy, computer simulations are conducted. The simulation results show that the required level of QoS is satisfied by the proposed algorithm and that its performance is very close to the performance of the uniform fractional guard channel policy, which needs to know all parameters of the input traffic. The simulation results also confirm the analysis of the steady-state behaviour.
Ramjee R, Towsley D, Nagarajan R (1997) On optimal call admission control in cellular networks. Wirel Netw 3:29–41
Hong D, Rappaport S (1986) Traffic modelling and performance analysis for cellular mobile radio telephone systems with prioritized and non-prioritized handoff procedures. IEEE Trans Veh Technol 35:77–92
Haring G, Marie R, Puigjaner R, Trivedi K (2001) Loss formulas and their application to optimization for cellular networks. IEEE Trans Veh Technol 50:664–673
Beigy H, Meybodi MR (2004) A new fractional channel policy. J High Speed Netw 13:25–36
Yoon CH, Kwan C (1993) Performance of personal portable radio telephone systems with and without guard channels. IEEE J Sel Areas Commun 11:911–917
Guern R (1988) Queuing-blocking system with two arrival streams and guard channels. IEEE Trans Commun 36:153–163
Li B, Li L, Li B, Sivalingam KM, Cao X-R (2004) Call admission control for voice/data integrated cellular networks: performance analysis and comparative study. IEEE J Sel Areas Commun 22:706–718
Chen X, Li B, Fang Y (2005) A dynamic multiple-threshold bandwidth reservation (DMTBR) scheme for QoS provisioning in multimedia wireless networks. IEEE Trans Wirel Commun 4:583–592
Beigy H, Meybodi MR (2005) A general call admission policy for next generation wireless networks. Comput Commun 28:1798–1813
Beigy H, Meybodi MR (2004) Adaptive uniform fractional channel algorithms. Iran J Electr Comput Eng 3:47–53
Beigy H, Meybodi MR (2005) An adaptive call admission algorithm for cellular networks. Electr Comput Eng 31:132–151
Beigy H, Meybodi MR (2008) Asynchronous cellular learning automata. Automatica 44:1350–1357
Beigy H, Meybodi MR (2011) Learning automata based dynamic guard channel algorithms. J Comput Electr Eng 37(4):601–613
Baccarelli E, Cusani R (1996) Recursive Kalman-type optimal estimation and detection of hidden markov chains. Signal Process 51:55–64
Baccarelli E, Biagi M (2003) Optimized power allocation and signal shaping for interference-limited multi-antenna ad hoc networks, vol 2775 of Springer lecture notes in computer science, Springer, pp 138–152
Beigy H, Meybodi MR (2009) Cellular learning automata based dynamic channel assignment algorithms. Int J Comput Intell Appl 8(3):287–314
Srikantakumar PR, Narendra KS (1982) A learning model for routing in telephone networks. SIAM J Control Optim 20:34–57
Nedzelnitsky OV, Narendra KS (1987) Nonstationary models of learning automata routing in data communication networks. IEEE Trans Syst Man Cybern SMC-17:1004–1015
Oommen BJ, de St Croix EV (1996) Graph partitioning using learning automata. IEEE Trans Comput 45:195–208
Beigy H, Meybodi MR (2006) Utilizing distributed learning automata to solve stochastic shortest path problems. Int J Uncertain Fuzziness Knowl Based Syst 14:591–615
Oommen BJ, Roberts TD (2000) Continuous learning automata solutions to the capacity assignment problem. IEEE Trans Comput 49:608–620
Moradabadi B, Beigy H (2014) A new real-coded Bayesian optimization algorithm based on a team of learning automata for continuous optimization. Genetic Programming and Evolvable Machines 15:169–193
Meybodi MR, Beigy H (2001) Neural network engineering using learning automata: determining of desired size of three layer feedforward neural networks. J Fac Eng 34:1–26
Beigy H, Meybodi MR (2001) Backpropagation algorithm adaptation parameters using learning automata. Int J Neural Syst 11:219–228
Oommen BJ, Hashem MK (2013) Modeling the learning process of the teacher in a tutorial-like system using learning automata. IEEE Trans Syst Man Cybern Part B Cybern 43(6):2020–2031
Yazidi A, Granmo OC, Oommen BJ (2013) Learning automaton based on-line discovery and tracking of spatio-temporal event patterns. IEEE Trans Syst Man Cybern Part B Cybern 43(3):1118–1130
Narendra KS, Thathachar MAL (1989) Learning automata: an introduction. Prentice Hall, New York
Srikantakumar P (1980) Learning models and adaptive routing in telephone and data communication networks. PhD thesis, Department of Electrical Engineering, Yale University, USA
Norman MF (1972) Markov processes and learning models. Academic Press, New York
Mood AM, Graybill FA, Boes DC (1963) Introduction to the theory of statistics. McGraw-Hill
The authors would like to thank the anonymous reviewers for their valuable comments and suggestions which improved the paper.
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
Hamid Beigy
Department of Computer Engineering, Amirkabir University of Technology, Tehran, Iran
M. R. Meybodi
Correspondence to Hamid Beigy.
Appendix: Proof of Theorems and Lemmas
In this appendix, we give the proof of some lemmas and theorems given in this paper.
Proof of Lemma 1
Before we begin to prove the lemma, we introduce some definitions and notation. To count how many calls have arrived, we introduce the concept of a local time for each type of call. The local time for each type of call starts at 0 and is incremented by 1 when a call of the given type arrives. Let us define \(n^n\) and \(n^h\) as the local times for new and hand-off calls, respectively. Then, we define two sequences of random variables \(n^n_m\) (\(n^n_1 < n^n_2<\cdots \)) and \(n^h_m\) (\(n^h_1 < n^h_2 < \cdots \)), where \(n^n_m\) (\(n^h_m\)) is the global time at which the \(m^{th}\) new (hand-off) call arrives.
The proof for the penalty probability \(c_1(p)\) is trivial, because action \(\mathrm{ACCEPT}\) is penalized when all allocated channels are busy. Since the probability of all channels being busy is equal to \(P_C\), \(c_1(p)\) is equal to \(P_C\). To find an expression for \(c_2(p)\), we define \(X_n\) as the indicator of the dropping of a hand-off call at hand-off local time \(n\), where \(X_n=1\) if a hand-off call arrives at hand-off local time \(n^h=n\) and is dropped, and \(X_n=0\) if a hand-off call arrives at hand-off local time \(n^h=n\) and is accepted. Since in the interval \([n,n+1]\) it is possible for \(M \ge 0\) new calls to be accepted or \(N \ge 0\) calls to be completed, the state of the Markov chain describing the cell at hand-off local time \(n+1\) is independent of its state at hand-off local time \(n\) when \(N+M > 0\). The case \(N+M=0\) is an exception, which we ignore in our analysis despite the violation of the Markov chain properties. Therefore, \(X_1,X_2,\ldots ,X_n\) are independent identically distributed (i.i.d.) random variables with the following first- and second-order statistics.
$$\begin{aligned} \mathrm{E}\left[ X_n\right]&= \sum _{k=0}^{C} k P_k= \rho \gamma \left[ 1-P_C\right] .\end{aligned}$$
$$\begin{aligned} \mathrm{Var}\left[ X_n\right]&= \mathrm{E} \left[ X^2_k\right] - \left( \mathrm{E} \left[ X_k\right] \right) ^2= \rho \gamma \left[ 1-P_C\right] \left[ 1+\rho \gamma P_C\right] - (\rho \gamma )^2P_{C-1}.\nonumber \\ \end{aligned}$$
By the central limit theorem, \(\bar{X}_n=\hat{B}_h=\frac{1}{n}\sum _{k=0}^nX_k\) is approximately a random variable with normal distribution \((\hat{B}_h \sim N(\mu _b,\sigma _b))\) with the following mean and variance [30].
$$\begin{aligned} \mu _b&= \mathrm{E}\left[ \hat{B}_h\right] =\mathrm{E}\left[ \bar{X}_n\right] = \mathrm{E}\left[ X_n\right] = \rho \gamma \left[ 1-P_C\right] .\end{aligned}$$
$$\begin{aligned} \sigma _b&= \mathrm{Var}\left[ \hat{B}_h\right] = \mathrm{Var}\left[ \bar{X}_n\right] = \frac{\mathrm{Var}\left[ X_n\right] }{n}, \nonumber \\&= \frac{\rho \gamma \left[ 1-P_C\right] \left[ 1+\rho \gamma P_C\right] - (\rho \gamma )^2P_{C-1}}{n}. \end{aligned}$$
Thus, the value of penalty probability of \(c_2(p)\) is equal to
$$\begin{aligned} c_2(p)&= \mathrm{Prob} \left[ \hat{B}_h < p_h\right] , \\&= \frac{1}{\sqrt{2\pi }\sigma _b}\int _{-\infty }^{p_h}e^{-\frac{1}{2}\left( \frac{x-\mu _b}{\sigma _b}\right) ^2}dx \end{aligned}$$
which completes the proof of this lemma. \(\square \)
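The penalty probabilities of Lemma 1 are easy to evaluate numerically. The sketch below is ours and assumes the Erlang-B (M/M/C/C) form for the channel-occupancy probabilities \(P_k\), which may differ from the model specified in the body of the paper; \(\rho\), \(\gamma\), \(p_h\) and the sample size \(n\) are free inputs.

```python
import math
from statistics import NormalDist

def state_probs(rho, C):
    # Stationary probabilities P_k = (rho^k / k!) / sum_j (rho^j / j!)
    # of a C-channel loss system; this Erlang-B form is our assumption.
    w = [rho ** k / math.factorial(k) for k in range(C + 1)]
    s = sum(w)
    return [x / s for x in w]

def penalty_probabilities(rho, gamma, C, p_h, n):
    """Evaluate c1(p) and c2(p) as given in the proof of Lemma 1.

    c1(p) = P_C, and c2(p) = Prob[B_h_hat < p_h], where B_h_hat is
    approximately normal with the mean mu_b and the variance sigma_b
    of the sample mean derived above (NormalDist expects a standard
    deviation, hence the square root).
    """
    P = state_probs(rho, C)
    P_C, P_Cminus1 = P[C], P[C - 1]
    mu_b = rho * gamma * (1.0 - P_C)
    var_xn = (rho * gamma * (1.0 - P_C) * (1.0 + rho * gamma * P_C)
              - (rho * gamma) ** 2 * P_Cminus1)
    sigma_b = max(var_xn, 0.0) / n
    c1 = P_C
    c2 = NormalDist(mu_b, math.sqrt(sigma_b)).cdf(p_h)
    return c1, c2
```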
Proof of Lemma 2

The proofs of items one through three follow directly from Eq. (7), so we only give the proof of item 4. From Eq. (7), when \(\rho < C\), we have
$$\begin{aligned} \frac{\partial c_1(p)}{\partial p_1}&= (1-a)P_C\left[ \frac{C}{\gamma }-\rho (1-P_C)\right] >0, \end{aligned}$$
$$\begin{aligned} \frac{\partial c_1(p)}{\partial p_2}&= -(1-a)P_C\left[ \frac{C}{\gamma }-\rho (1-P_C)\right] <0. \end{aligned}$$
Using Eq. (7), we obtain
$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_2}&= \frac{1}{\sigma _b \sqrt{2\pi }}\left[ e^{\frac{-1}{2} \left( \frac{p_h - \mu _b}{\sigma _b}\right) ^2}\left\{ \left( \frac{\mu -p_h}{\sigma _b}\right) \frac{\partial \sigma _b}{\partial p_2}-\frac{\partial \mu _b}{\partial p_2}\right\} \right. \nonumber \\&\quad \left. -\frac{2}{\sigma _b}\frac{\partial \sigma _b}{\partial p_2}\int _{-\infty }^{p_h}e^{\frac{-1}{2} \left( \frac{x - \mu _b}{\sigma _b}\right) ^2} dx\right] , \\ \frac{\partial c_2(p)}{\partial p_1}&= \frac{-1}{\sigma _b \sqrt{2\pi }}\left[ e^{\frac{-1}{2} \left( \frac{p_h - \mu _b}{\sigma _b}\right) ^2}\left\{ \left( \frac{\mu -p_h}{\sigma _b}\right) \frac{\partial \sigma _b}{\partial p_1}-\frac{\partial \mu _b}{\partial p_1}\right\} \right. \nonumber \\&\quad \left. -\frac{2}{\sigma _b}\frac{\partial \sigma _b}{\partial p_1}\int _{-\infty }^{p_h}e^{\frac{-1}{2} \left( \frac{x - \mu _b}{\sigma _b}\right) ^2} dx\right] . \end{aligned}$$
Increasing \(p_2\), decreases the probability of accepting new calls and hence the number of busy channels decreased. Therefore, the dropping probability of hand-off calls is decreased or \(c_2(p)=\mathrm{Prob} \left[ \hat{B}_h < p_h\right] \) is increased. Thus, we have
$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_2}&> 0 \end{aligned}$$
$$\begin{aligned} \frac{\partial c_2(p)}{\partial p_1}&< 0. \end{aligned}$$
However, by choosing proper values for the parameters, the condition \(\frac{\partial c_2(p)}{\partial p_2} > 0\) is also satisfied. Eq. (9) follows from Eqs. (22) and (24), Eq. (10) follows from Eqs. (22) and (25), and Eq. (11) follows from Eqs. (23) and (24). This completes the proof of this lemma. \(\square \)
Proof of Lemma 3

Consider \(f(p)\) at its two end points
$$\begin{aligned} f(p) = \left\{ \begin{array}{lll} c_2(0,1) &{} p_1=0 \\ -c_1(1,0) &{} p_1=1. \end{array} \right. \end{aligned}$$
Since \(f(p)\) is a continuous function of \(p_1\) and \(p_2\), there exists at least one \(p^*\) such that \(f(p^*)=0\). To prove the uniqueness of \(p^*\), we compute the derivative of \(f(p)\) with respect to \(p_1\) and, using Lemma 2, obtain
$$\begin{aligned} \frac{\partial f(p)}{\partial p_1}&= \frac{\partial c_2(p)}{\partial p_1} - \left( 1+p_1\right) \left( c_1+c_2\right) ,\\&< 0. \end{aligned}$$
Since the derivative of \(f(p)\) with respect to \(p_1\) is negative, \(f(p)\) is a strictly decreasing function of \(p_1\). Thus there exists one and only one point \(p^*\) at which \(f(p)\) crosses zero, and hence the lemma. \(\square \)
Proof of Lemma 4

Define \(p^2=p^Tp\) for vector \(p\). Let \(p=p_1\) and
$$\begin{aligned} g(p) = \left\{ \begin{array}{ll} \frac{w(p)}{p^*-p} &{} p \ne p^* \\ \left. -\frac{\partial w(p)}{\partial p}\right| _{p=p^*} &{} p=p^* \end{array} \right. \end{aligned}$$
Since \(w(p) < 0\) when \(p > p^*\) and \(w(p) > 0\) when \(p<p^*\), \(g(p)\) is positive and continuous in interval \([0,1]\). Hence, there exists a \(R > 0\) such that \(g(p) \ge R\). Thus, we have
$$\begin{aligned} \left[ p^* - p(n)\right] w(p(n))&= \left[ p^*-p(n)\right] ^2g(p(n)), \nonumber \\&\ge R \left[ p^*-p(n)\right] ^2. \end{aligned}$$
for all probabilities \(p\). Then, computing
$$\begin{aligned} \left[ p(n+1)-p^*\right] ^2= \left[ p(n)-p^*\right] ^2 + 2\left[ p(n)-p^*\right] \Delta p(n) + \Delta p^2(n) \end{aligned}$$
and taking expectation on both sides, cancelling \(\mathrm{E}\left[ p(n)-p^*\right] ^2\) and dividing by \(2a\), we obtain
$$\begin{aligned} \mathrm{E}\left[ \left\{ p(n)-p^*\right\} \frac{\Delta p(n)}{a}\right] + \frac{a}{2} \mathrm{E} \left[ \frac{\Delta p^2(n)}{a^2}\right] =0, \end{aligned}$$
$$\begin{aligned} \mathrm{E}\left[ \left\{ p(n)-p^*\right\} w\left( p(n)\right) \right] + \frac{a}{2} \mathrm{E} \left[ \tilde{S} \left( p(n)\right) \right] =0. \end{aligned}$$
Since we have only bounded variables, \(\tilde{S} \left( p(n)\right) \) is also bounded; thus, there exists a \(K > 0\) such that \(\mathrm{E}\left[\tilde{S} \left( p(n)\right) \right] \le K\). Hence, we obtain
$$\begin{aligned} \mathrm{E}\left[ \left\{ p^*-p(n)\right\} w\left( p(n)\right) \right]&= \frac{a}{2} \mathrm{E} \left[ \tilde{S} \left( p(n)\right) \right] , \\&\le K a. \end{aligned}$$
Using this together with Eq. (28), we obtain
$$\begin{aligned} \mathrm{E} \left[ p^*-p(n)\right] ^2&\le K \mathrm{E} \left[ \left\{ p^* - p(n)\right\} w(p(n))\right] ,\\&\le Ka, \\&= O(a). \end{aligned}$$
and hence the lemma.\(\square \)
Proof of Lemma 5

To prove Eq. (15), let us define
$$\begin{aligned} \zeta =\frac{\mathrm{E} \left[ \Delta z(n)|z(n)\right] }{\sqrt{a}}=\frac{\mathrm{E} \left[ \Delta p(n)|z(n)\right] }{a} = w\left( p(n)\right) -w\left( p^*\right) . \end{aligned}$$
Since \(w(.)\) is Lipschitz with bound \(\beta \), we have \(|w\left( p(n)\right) -w\left( p^*\right) | \le K |p(n) - p^*|,\) where \(K>0\) is a constant. Using this together with Eq. (29), we obtain
$$\begin{aligned} |\zeta | \le K |p(n) - p^*| \le K \sqrt{a} |z(n)|. \end{aligned}$$
$$\begin{aligned} h(\lambda )=w(x+\lambda (y-x)) \end{aligned}$$
where \(\lambda \in [0,1]\). It follows that
$$\begin{aligned} h'(\lambda )&= \frac{\partial h(\lambda )}{\partial \lambda }, \nonumber \\&= w'(x+\lambda (y-x))(y-x). \end{aligned}$$
Since \(h'(.)\) is continuous, we have
$$\begin{aligned} w(y)-w(x) = h(1) - h(0) = \int _0^1w'(x+\lambda (y-x))[y-x]d.\lambda \end{aligned}$$
Subtracting \(w'(x)(y-x)\) from both sides of the above equation, we obtain
$$\begin{aligned} w(y)-w(x)-w'(x)[y-x]=\int _0^1\left[ w'(x+\lambda (y-x))-w'(x)\right] [y-x]d\lambda \end{aligned}$$
Since \(w(.)\) is Lipschitz with bound \(\beta \), we obtain
$$\begin{aligned} w(y)-w(x)-w'(x)(y-x) \le \frac{\beta }{2}|y-x|^2. \end{aligned}$$
Substituting \(y\) with \(p(n)\) and \(x\) with \(p^*\) in the above equation, we obtain
$$\begin{aligned} w(p(n))-w(p^*)-w'(p^*)(p(n)-p^*)&\le K |p(n)-p^*|^2, \nonumber \\ w(p(n))-w(p^*)-\sqrt{a}w'(p^*)z(n)&\le K a |z(n)|^2. \end{aligned}$$
Using Eqs. (29) and (30) and Lemma 4, we obtain
$$\begin{aligned} |\zeta -\sqrt{a}w'(p^*)z(n)|&\le K a |z(n)|^2, \nonumber \\&\le K |p(n)-p^*|^2, \nonumber \\&\le K a. \end{aligned}$$
Multiplying both sides of the above equation by \(\sqrt{a}\), we obtain
$$\begin{aligned} \left| \sqrt{a}\zeta -aw'(p^*)z(n)\right| \le K a^{3/2}, \end{aligned}$$
$$\begin{aligned} \left| \mathrm{E} [\Delta z(n)|z(n)]- aw'(p^*)z(n)\right| \le K \sqrt{a}, \end{aligned}$$
which implies Eq. (15). To derive Eq. (16), let us define
$$\begin{aligned} \eta =\frac{\mathrm{E} \left[ \Delta z^2(n)|z(n)\right] }{a} = S(p(n))= \tilde{S}(p(n)) + \zeta ^2. \end{aligned}$$
By subtracting \(\tilde{S}(p(n))\) from both sides of the above equation, we obtain
$$\begin{aligned} |\eta - \tilde{S}(p^*)|&= |\tilde{S}(p(n)) + \zeta ^2 - \tilde{S}(p^*)| \nonumber \\&\le |\tilde{S}(p(n)) - \tilde{S}(p^*)|+ |\zeta ^2|. \end{aligned}$$
Since \(\tilde{S}(.)\) is Lipschitz, we have
$$\begin{aligned} |\tilde{S}(p(n)) - \tilde{S}(p^*)| \le K |p(n) - p^*|. \end{aligned}$$
Substituting Eq. (30) into Eq. (35), we obtain
$$\begin{aligned} |\eta - \tilde{S}(p^*)| \le K |p(n) - p^*| + K |p(n) - p^*|^2 \end{aligned}$$
Using Lemma 4, we have \(\mathrm{E}\left[ p(n)-p^*\right] ^2 \le Ka \) and \(\mathrm{E}\left[ p(n)-p^*\right] \le K\sqrt{a} \). Thus, we obtain \(|\eta - \tilde{S}(p^*)| = o(a)\). Hence \(\mathrm{E}|\eta - \tilde{S}(p^*)| \rightarrow 0\) as \(a \rightarrow 0\), which confirms Eq. (16). Equation (17) follows by observing that
$$\begin{aligned} \mathrm{E} \left[ \left. \left| \frac{\Delta p(n) }{a}\right| ^3\right| p(n)=p\right] =\xi (p) < \xi < \infty \end{aligned}$$
Substituting Eq. (17) into the above equation, we obtain
$$\begin{aligned} \mathrm{E} \left[ \left. \left| \frac{\Delta p(n)}{a}\right| ^3\right| p(n) \right]&< \xi \nonumber \\ \mathrm{E} \left[ \left| \frac{|\Delta z(n)|^3}{a^{3/2}}\right| p(n)\right]&< \xi \nonumber \\ \mathrm{E} \left[ \left. |\Delta z(n)|^3\right| p(n)\right]&< \xi a ^{3/2} \end{aligned}$$
where \(\xi a ^{3/2} \rightarrow 0\) as \(a \rightarrow 0\). This completes the proof of this lemma. \(\square \)
Proof of Theorem 2
Let \(h(u)=\mathrm{E}\left[ e^{iuz(n)}\right] \) be the characteristic function of \(z(n)\). Then, using the third-order Taylor expansion of \(e^{iu}\) for real \(u\), we obtain
$$\begin{aligned} \mathrm{E}\left[ \left. e^{iuz(n)}\right| z(n)\right]&= 1 + iu \mathrm{E}[\Delta z(n)|z(n)] - \frac{u^2}{2} \mathrm{E}\left[ \left. \Delta z^2(n)\right| z(n)\right] \nonumber \\&\quad + k |u|^3 \mathrm{E}\left[ \left. |\Delta z(n)|^3 \right| z(n)\right] , \end{aligned}$$
where \(k \le 1/6\); thus
$$\begin{aligned} h(u)&= \mathrm{E}\left[ e^{iuz(n+1)}\right] , \nonumber \\&= \mathrm{E}\left[ e^{iuz(n)} \mathrm{E}\left( \left. e^{iu\Delta z(n)}\right| z(n)\right) \right] ,\nonumber \\&= h(u) + iu \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \Delta z(n)|z(n)\right\} \right] , \nonumber \\&- \frac{u^2}{2} \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. \Delta z^2(n)\right| z(n)\right\} \right] + k |u|^3 \mathrm{E}\left[ k e^{iuz(n)}\mathrm{E}\left\{ \left. |\Delta z(n)|^3 \right| z(n)\right\} \right] . \nonumber \\ \end{aligned}$$
Cancelling \(h(u)\) and dividing by \(u\) yields
$$\begin{aligned}&i \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \Delta z(n)|z(n)\right\} \right] - \frac{u}{2} \mathrm{E}\left[ e^{iuz(n)}\mathrm{E}\left\{ \left. \Delta z^2(n)\right| z(n)\right\} \right] \nonumber \\&\qquad +\, k |u|^2 \mathrm{E}\left[ k e^{iuz(n)}\mathrm{E}\left\{ \left. |\Delta z(n)|^3 \right| z(n)\right\} \right] =0. \end{aligned}$$
Thus, using estimates of Lemma 5, we have
$$\begin{aligned}&i a w'(p^*)\mathrm{E}\left[ e^{iuz(n)}z(n)\right] -\frac{u}{2} \tilde{S}(p^*) \mathrm{E}\left[ e^{iuz(n)}\right] +\mathrm{E}\left[ o(a)\right] \nonumber \\&\qquad +\,u\mathrm{E}\left[ o(a)\right] +u^2\mathrm{E}\left[ o(a)\right] =0. \end{aligned}$$
From Eqs. (15) and (17), it is evident that \(\mathrm{E}[|z(n)|] < \infty \) when \(a\) is small or
$$\begin{aligned} a w'(p^*)\frac{dh(u)}{du}-\frac{u}{2} \tilde{S}(p^*) h(u)+\mathrm{E}\left[ o(a)\right] +u\mathrm{E}\left[ o(a)\right] +u^2\mathrm{E}\left[ o(a)\right] =0. \end{aligned}$$
Dividing the above equation by \(aw'(p^*)\) and using fact \(w'(p^*)<0\), we obtain
$$\begin{aligned} \frac{dh(u)}{du}+u\frac{\tilde{S}(p^*)}{2 \left| w'(p^*)\right| } h(u)+\epsilon (u) = 0, \end{aligned}$$
$$\begin{aligned} \varphi = \sup _u \frac{|\epsilon (u)|}{1+u^2} \rightarrow 0, \end{aligned}$$
as \(a \rightarrow 0\). Since \(h(0)=1\), it follows that
$$\begin{aligned} h(u)=e^{-\frac{(u\sigma )^2}{2}}\left( 1-\int _0^ue^\frac{(ux)^2}{2}dx\right) , \end{aligned}$$
where \(\sigma ^2=\frac{\tilde{S}(p^*)}{2 \left| w'(p^*)\right| }\). But we have
$$\begin{aligned} \left| \int _0^{|u|}e^\frac{(ux)^2}{2}\epsilon (u)dx\right| \le \varphi \int _0^{|u|}e^\frac{(ux)^2}{2}\left( 1+x^2\right) dx \rightarrow 0, \end{aligned}$$
as \(a \rightarrow 0\); thus
$$\begin{aligned} h(u) \rightarrow e^{-\frac{(u\sigma )^2}{2}}. \end{aligned}$$
Then using the facts that each characteristic function determines the distribution uniquely and \(h(u)\) is characteristic function of \(N(0,\sigma ^2)\), thus we obtain
$$\begin{aligned} z(n) \sim N(0,\sigma ^2), \end{aligned}$$
and hence the theorem. \(\square \)
Proof of Theorem 3

In the equilibrium state, the average penalty rates for both actions are equal, i.e. \(f_1(p^*)=f_2(p^*)\), which yields \(c_1\pi ^*=c_2(1-\pi ^*)\). Thus we have
$$\begin{aligned} \pi ^* = \frac{\delta }{\delta +P_C}, \end{aligned}$$
where \(\delta =\mathrm{Prob} \left[ \hat{B}_h < p_h\right] \). Thus the average number of blocked new calls, \(\bar{N}_n\), is equal to
$$\begin{aligned} \bar{N}_n&= \lambda _n \left[ 1-\pi ^*(1-P_C)\right] , \nonumber \\&= \lambda _n (1+\delta ) \frac{P_C}{P_C+\delta }. \end{aligned}$$
Computing the derivative of \(\bar{N}_n\) with respect to \(\delta \) gives
$$\begin{aligned} \frac{\partial \bar{N}_n}{\partial \delta }&= - \lambda _n \frac{P_C(1-P_C)}{(P_C+\delta )^2},\nonumber \\&< 0. \end{aligned}$$
Thus \(\bar{N}_n\) is a strictly decreasing function of \(\delta \). Since the adaptive UFC algorithm gives the higher priority to hand-off calls, it attempts to minimize the dropping probability of hand-off calls. Using this fact and Eq. (40), it is evident that \(\bar{N}_n\) is minimized, which in turn minimizes the blocking probability of new calls, and hence the theorem.\(\square \)
Beigy, H., Meybodi, M.R. A learning automata-based adaptive uniform fractional guard channel algorithm. J Supercomput 71, 871–893 (2015) doi:10.1007/s11227-014-1330-7
Issue Date: March 2015
Learning automata
Uniform fractional guard channel policy
Adaptive uniform fractional guard channel policy
\begin{definition}[Definition:Image of Topological Space]
Let $T = \struct {S, \tau}$ and $Q = \struct {X, \tau'}$ be topological spaces.
Let $f: S \to X$ be a mapping.
The '''image (of the topological space $T$) of''' $f$ is defined as:
:$\Img f := Q_{f \sqbrk S} = \struct {f \sqbrk S, \tau'_{f \sqbrk S} }$
where $\tau'_{f \sqbrk S}$ denotes the subspace topology on $f \sqbrk S$.
\end{definition}
\begin{document}
\title{Group cohesion under individual regulatory constraints} \author{Delia Coculescu and Freddy Delbaen} \address{Institut f\"ur Banking und Finance, Universit\"at Z\"urich, Plattenstrasse
14, 8032 Z\"{u}rich, Switzerland} \address{Departement f\"ur Mathematik, ETH Z\"urich, R\"{a}mistrasse
101, 8092 Z\"{u}rich, Switzerland}
\address{
Institut f\"ur Mathematik,
Universit\"at Z\"urich, Winterthurerstrasse 190,
8057 Z\"urich, Switzerland} \date{\today} \thanks{The first author thanks the participants at the seminar Finance and Insurance at the University of Zurich, in particular Pablo Koch Medina for helpful interactions. } \maketitle
\begin{abstract} We consider a group consisting of $N$ business units. We suppose there are regulatory constraints for each unit, more precisely, the net worth of each business unit is required to belong to a set of acceptable risks, assumed to be a convex cone. Because of these requirements, there are less incentives to operate under a group structure, as creating one single business unit, or altering the liability repartition among units, may allow to reduce the required capital. We analyse the possibilities for the group to benefit from a diversification effect and economise on the cost of capital. We define and study the risk measures that allow for any group to achieve the minimal capital, as if it were a single unit, without altering the liability of business units, and despite the individual admissibility constraints. We call these risk measures cohesive risk measures. \end{abstract} \section{Introduction} We consider an insurance group structured in $N\geq 2$ business units. Each unit $i$ has some exogenous liability, modelled as a random variable $X_i\geq 0$ on a probability space $(\Omega,{\mathcal{F}},{\mathbb P})$. We suppose that the net worth of each unit is subject to constraints, e.g. that are set by a regulator, namely it needs to belong to a certain set ${\mathcal{A}}$ of ``acceptable positions''. We consider that ${\mathcal{A}}$ is a convex cone so that the functional $\rho:L^1(\Omega,{\mathcal{F}},{\mathbb P})\to {\mathbf R}$: $$ \rho(\xi)=\inf\{m\;:\; \xi+m\in{\mathcal{A}}\} $$is a coherent risk measure (see \cite{ADEH2}).
Whenever the aggregation of the units' liabilities is possible, the group is only required to hold the capital $\rho\(-\sum_{i=1}^NX_i\)$. By convexity of $\rho$ we have that $\rho\(-\sum_{i=1}^NX_i\)\leq \sum_{i=1}^N\rho\(-X_i\)$, which reflects the fact that the group would achieve a lower required capital as compared with $N$ separated entities with the same liabilities. In this situation, risk aggregation is beneficial because it reduces the capital.
In this paper we assume that there are legal or geographical limitations that prevent risk transfers or the aggregation of liabilities from taking place with the aim of reducing the regulatory capital. Each business unit must face the regulatory requirements individually. Nevertheless, at the group level it is possible to manage the available capital and make certain monetary transfers to compensate for losses occurring at time 1 within the different business units. The set of all such possible monetary compensations will be called admissible payoffs. For instance, when the available capital at the group level is $m>0$, the set of admissible payoffs will be denoted by ${\mathbb A}^{\mathbf X}(m)$; it contains the nonnegative, $N$-dimensional random vectors ${\mathbf Y}=(Y_1,...,Y_N)$ such that $\sum_iY_i=m$ and fulfilling some additional rules specifying the payment priority of the different units (as given in Definition \ref{defi} below).
In this framework, unit $i$ receives a payoff $Y_i$ so that its net worth is $Y_i-X_i$. The lowest overall capital that the group needs to hold when it has liability given by the vector ${\mathbf X}=(X_1,...,X_N)$ is: \begin{equation}\label{ka}
{\mathcal{K}}({\mathbf X}):=\inf\{m\geq 0\;|\; \exists {\mathbf Y}\in{\mathbb A}^{\mathbf X}(m),\forall i, Y_i-X_i\in{\mathcal{A}}\}. \end{equation} In general, when the business units are facing such individual admissibility constraints, it is the case that the cost of capital ${\mathcal{K}}({\mathbf X})$ is higher than the minimal cost obtained with aggregating the risks, $\rho\(-\sum_{i=1}^NX_i\)$. Hence, the existence of individual constraints reduces the benefit of being a group. In such circumstances the incentives are to organise the business differently, as a unique entity, or form some optimised subgroups, depending on the liability vector.
The topic of this paper, is to characterise the acceptability sets ${\mathcal{A}}$ that satisfy the property \begin{equation}\label{eq} {\mathcal{K}}({\mathbf X})=\rho\(-\sum_{i=1}^NX_i\),\;\forall {\mathbf X}\in (L^\infty)^N. \end{equation} We will show that the relation (\ref{eq}) is rather restrictive. We call the corresponding risk measures cohesive, as any group requires the same amount of capital as if it were a single entity, despite the impossibility to aggregate liability to take advantage of the convexity of the regulatory constraint. When the risk measure is cohesive, the group benefits of the maximal diversification gain, as if it were a single entity, even with individual capital constraints for the group members.
On the way of characterising the cohesive risk measures, we will show how admissible payoffs can be designed in order to offset the liability for each business unit and achieve an acceptable net worth.
Our problem formulation is connected to the topic of optimal risk transfers within a group based on convex risk measures, which has been studied in a substantial body of literature. We refer to Heath and Ku \cite{HeaKu04}, Barrieu and El Karoui \cite{BarrElK05}, \cite{BarrElK05a}, Jouini et al. \cite{JouSchTou08}, Filipovi\'c and Kupper \cite{FilKup08}, Burgert and R\"uschendorf \cite{BurRus06}, Embrechts et al. \cite{EmbLiuMao}. In these papers, the problem is formulated generally as $$ \inf_{\boldsymbol \xi}\sum_{i=1}^N\rho_i(-\xi_i) $$ over all vectors $\boldsymbol \xi=(\xi_1,\cdots,\xi_N)$ satisfying $\sum_{i=1}^N\xi_i=\eta$ for some fixed $\eta$. Note that each business unit may use a specific risk measure in this framework.
The main difference with the optimal risk transfer literature is to introduce individual risk admissibility constraints for every unit. At the same time, we consider that the liability at the level of each unit is not transferable among units and we introduce solvability constraints, namely that payments can only be made within the limits of the available capital. Further, when the group is insolvent, we introduce fixed rules for how the payments are to be made. This framework is similar to the one we have introduced in \cite{CocDel20}, where the question addressed was the fairness of insurance contracts in presence of default risk. With similar rules for payments in bankruptcy and admissibility conditions for the payments, we have shown that it is not possible in general to perfectly offset the default risk exposure of the insured agents by proposing them a benefit participation. The question of when such offsetting payments can take place was not addressed there and the current paper also brings clarifications in that context.
\section{Setup and main definitions} Let us introduce the mathematical setup more precisely. We work in a two-date model: time 0, where everything is known, and time 1, where randomness is present. Possible outcomes at time 1 are modelled as random variables on a probability space $(\Omega, {\mathcal{F}}, {\mathbb P})$, assumed to be atomless. Unless otherwise specified, all equalities and inequalities involving random variables are to be understood in the ${\mathbb P} \;{\frenchspacing a.s.}~$ sense.
The space of risks occurring at time 1 is considered to be $L^\infty(\Omega, {\mathcal{F}}, {\mathbb P})$, simply denoted $L^\infty$, i.e., the collection of all essentially bounded random variables.
At time 0, a regulator measures risks by means of a convex functional $\rho$ fulfilling the properties detailed below.
\begin{defi} A mapping $\rho\colon L^\infty\rightarrow {\mathbb R}$ is called a coherent risk measure if the following properties hold:
\begin{enumerate}
\item if $ \xi\geq 0$ then $\rho(\xi)\le 0$;
\item $\rho$ is convex: for all $\xi,\eta\in L^\infty$, $0\le \lambda\le 1$ we have $\rho(\lambda \xi +(1-\lambda)\eta)\le \lambda \rho(\xi) + (1-\lambda) \rho(\eta)$;
\item for $a\in {\mathbb R}$ and $\xi\in L^\infty$, $\rho(\xi +a)=\rho(\xi) - a$;
\item for all $0\le \lambda\in {\mathbb R}$, $\rho(\lambda \xi)=\lambda \rho(\xi)$;
\item (the Fatou property) for any sequence $\xi_n\downarrow\xi$ (with $\xi_n\in L^\infty$) we have $\rho(\xi_n)\uparrow \rho(\xi)$.
\end{enumerate}
\end{defi}
We refer to \cite{ADEH2}, \cite{Pisa}, \cite{FDbook} for an interpretation of these mathematical properties and how they apply to the framework of risk regulation. The main idea is that the regulator only accepts risks $\xi$ that satisfy $\rho(\xi)\leq 0$, hence we say that a random variable $\xi$ is acceptable whenever $\rho(\xi)\le 0$. Remark that $\xi+\rho(\xi)$ is always acceptable, so that $\rho(\xi)$ is interpreted as the capital required for the risk $\xi$. If $\rho$ is coherent then the acceptability set $$ {\mathcal{A}}:=\{\xi\mid \rho(\xi)\le 0\} $$ is a convex cone.
The Fatou property allows to apply convex duality theory and establishes a one-to-one correspondence between a coherent risk measure and a convex closed set ${\mathcal{S}}$ consisting of probabilities which are absolutely continuous with respect to ${\mathbb P}$ (the so-called scenario set of $\rho$): \begin{theo} If $\rho$ is coherent, there exists a convex closed set ${\mathcal{S}}\subset L^1$, consisting of probability measures, absolutely continuous with respect to ${\mathbb P}$, such that for all $\xi \in L^\infty$: $$ \rho(\xi)=\sup_{{\mathbb Q}\in{\mathcal{S}}} {\mathbb E}_{\mathbb Q}[-\xi].$$ Conversely, each such set ${\mathcal{S}}$ defines a coherent risk measure. \end{theo}
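As a purely numerical illustration of the dual representation (not part of the formal development), the following minimal Python sketch approximates the setting by a finite sample space and a finite set of scenario densities; the data are hypothetical and the variable names are ours.
\begin{verbatim}
import numpy as np

# Finite approximation: 4 equally likely states under the reference P.
p = np.full(4, 0.25)
xi = np.array([1.0, 0.5, -0.3, -1.2])      # a bounded position xi

# A few scenario densities dQ/dP (each integrates to 1 under P); a true
# scenario set S is convex and closed, here we only keep extreme points.
densities = [np.array([2.0, 2.0, 0.0, 0.0]),
             np.array([0.0, 0.0, 2.0, 2.0]),
             np.array([1.0, 1.0, 1.0, 1.0])]

# rho(xi) = sup_{Q in S} E_Q[-xi] = sup over densities of E_P[d * (-xi)]
rho = max(p @ (d * (-xi)) for d in densities)

# cash additivity: rho(xi + a) = rho(xi) - a
a = 0.7
rho_shift = max(p @ (d * (-(xi + a))) for d in densities)
assert abs(rho_shift - (rho - a)) < 1e-12
\end{verbatim}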
We shall use the assumption that ${\mathcal{S}}$ is weakly compact, so that we will be able to replace the sup by a max. Indeed, as a direct application of James's theorem, weak compactness is equivalent to the nonemptiness of the subgradient of $\rho$ at any point: for every $\xi\in L^\infty$, $\nabla \rho(\xi)\neq \emptyset$, that is, there is a ${\mathbb Q}\in{\mathcal{S}}$ with $\rho(\xi)={\mathbb E}_{\mathbb Q}[-\xi]$.
An additional assumption that we will use is that the risk measure $\rho$ used by the regulator is commonotonic.
\begin{defi} We say that two random variables $\xi,\eta$ are commonotonic if there exist a random variable $\zeta$ as well as two non-decreasing functions $f,g\colon {\mathbb R}\rightarrow{\mathbb R}$ such that $\xi=f(\zeta)$ and $\eta=g(\zeta)$. \end{defi}
\begin{defi} We say that $\rho\colon L^\infty\rightarrow{\mathbb R}$ is commonotonic if for each couple $(\xi,\eta)$ of commonotonic random variables we have $\rho(\xi+\eta)=\rho(\xi)+\rho(\eta)$. \end{defi}
\begin{rem} Loosely speaking, two risks are commonotonic if they are bets on the same event. Indeed, $\xi$ and $\eta$ being nondecreasing functions of $\zeta$, neither of them is a hedge against the other. The commonotonicity of $\rho$ can therefore be seen as a translation of the rule: if there is no diversification, there is also no gain in putting these claims together. \end{rem} \begin{rem} If $\rho$ is commonotonic then for nonnegative random variables $f,g\in L^\infty$ satisfying ${\mathbb P}[f>0,g>0]=0$ we have that $\rho(f+g)=\rho(f)+\rho(g)$. In particular for $\xi\in L^\infty$: $\rho(\xi)=\rho(\xi^+)+\rho(-\xi^-)$. We also have that for ${\mathbb Q}\in{\mathcal{S}}$ satisfying $\rho(\xi)={\mathbb E}_{\mathbb Q}[-\xi]$, necessarily also $\rho(\xi^+)={\mathbb E}_{\mathbb Q}[-\xi^+]$ and $\rho(-\xi^-)={\mathbb E}_{\mathbb Q}[\xi^-]$. This easily follows from the subadditivity of $\rho$. \end{rem}
All notions above are standard in the theory of risk measures. We now introduce some definitions that are specific to the framework of this paper, that is the one of a group consisting of $N$ distinct units under regulatory supervision.
\begin{defi}\label{defi} We denote by ${\mathcal{X}}$ the space of $N$ dimensional random variables which are positive and bounded.
We consider a liability vector ${\mathbf X}=(X_1,\cdots,X_N)\in {\mathcal{X}}$ and a level of capital $m\in{\mathbb R}_+$. The class of admissible payoffs from a total capital $m$, corresponding to the liability ${\mathbf X}$, is defined as:
$$
{\mathbb A}^{\mathbf X}(m)=
\left \{ {\mathbf Y}\in {\mathcal{X}} \biggm | \sum_{i=1}^NY_i=m;\; \forall k\in\{1,...,N\}:
\begin{array}{l}
\text{ if $\sum_{i=1}^N X_i>m$ then $Y_k=\frac{X_k}{\sum_{i=1}^NX_i} m$} \\
\text{ if $\sum_{i=1}^N X_i\leq m$ then $Y_k\geq X_k$}
\end{array}
\right \}.
$$
\end{defi}
Admissible payoffs respect the following rules. If the capital $m$ is less than the aggregate liability $\sum_i X_i$, the group defaults. In this case, all liabilities have the same priority of payment, regardless of the unit to which they correspond. Hence, in default, the whole capital $m$ is distributed to the units in proportion to their liabilities. Whenever the group is solvent at the aggregate level ($\sum_i X_i\leq m$), every unit must be solvent as well, hence the central unit distributes to each unit $i$ a payment that covers the liability $X_i$. As there is a surplus in this case, some units will get more than their liability as a payoff.
\begin{exa} Let us consider that each business unit receives some constant proportion of the surplus $(m-\sum_i X_i)^+$. The corresponding admissible payoffs (that we shall call standard payoffs) are given as follows:
\begin{align}\label{formYi}
Y_k&=\[X_k+\alpha_k\(m-\sum_i X_i\)\right]{\rm \bf 1}_{\{\sum_i X_i\leq m\}}+X_k\(\frac{m}{\sum_i X_i}\){\rm \bf 1}_{\{\sum_i X_i> m\}},\quad k=1,\ldots,N
\end{align} where each $\alpha_k$ is a nonnegative constant and $\sum_{i=1}^N\alpha_i=1$.
\end{exa}
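To make the construction concrete, here is a minimal Python sketch of the standard payoffs (\ref{formYi}) on a finite state space (again, purely illustrative; the liability matrix and the weights are hypothetical, and the helper name is ours).
\begin{verbatim}
import numpy as np

def standard_payoff(X, alpha, m):
    # X has shape (N, n_states); alpha are fixed nonnegative weights
    # summing to one; m is the total capital.
    S = X.sum(axis=0)                      # aggregate liability per state
    solvent = S <= m
    surplus = np.where(solvent, m - S, 0.0)
    return np.where(solvent,
                    X + alpha[:, None] * surplus,  # liability + surplus share
                    X / S * m)                     # proportional write-down
                                                   # (S > 0 in this example)

X = np.array([[1.0, 2.0, 0.5],
              [1.0, 1.0, 2.5]])            # two units, three states
alpha = np.array([0.4, 0.6])
Y = standard_payoff(X, alpha, m=2.5)
assert np.allclose(Y.sum(axis=0), 2.5)     # the capital is fully distributed
\end{verbatim}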
\begin{defi}\label{defiOptPay} Given a level of capital $m$, an \textit{offsetting payoff} corresponding to the liability ${\mathbf X}\in{\mathcal{X}}$, is a vector of random variables ${\mathbf Y}\in{\mathbb A}^{\mathbf X}(m)$ satisfying \begin{equation} Y_i-X_i\in {\mathcal{A}} \text{ for all $i\in\{1,...,N\}$,} \end{equation} that is, the net worth of any unit is acceptable. \end{defi} Offsetting payoffs cannot be achieved when the overall capital $m$ is insufficient. The analysis in the next section will reveal that offsetting payoffs can never be achieved when the capital $m$ is less than $K:=\rho(-\sum_i X_i)$, that is, the minimal capital required for the aggregated liability. Also, in general, holding the capital $K$ does not guarantee the existence of these payoffs, so that the group may be required to hold more capital. This justifies introducing the following additional definition:
\begin{defi} A coherent risk measure $\rho$ is called \textit{cohesive} if for any risk vector ${\mathbf X}\in{\mathcal{X}}$ there exists ${\mathbf Y}\in{\mathbb A}^{\mathbf X}(\rho(-\sum_{i=1}^N X_i))$ such that \begin{equation} Y_i-X_i\in {\mathcal{A}} \text{ for all $i\in\{1,...,N\}$.} \end{equation} \end{defi}
\section{Properties of offsetting payoffs with minimal capital}\label{Secoffsetting} We use the setup and notation from the previous section; in particular, $\rho$ is a coherent risk measure that is commonotonic, with a corresponding scenario set ${\mathcal{S}}$ assumed to be weakly compact. We shall consider a fixed liability vector ${\mathbf X}\in{\mathcal{X}}$ and denote the aggregated group liability by $$ S^{\mathbf X}:=\sum_{i=1}^N X_i. $$ We also denote $$ K:=\rho\(-\sum_{i=1}^N X_i\), $$ that is, the minimum capital for the aggregated liability. Under individual regulatory constraints for the business units, the minimum regulatory capital for the group is denoted ${\mathcal{K}}({\mathbf X})$ and its expression was introduced in (\ref{ka}).
A first observation is that the group needs to hold at least a capital of $K$. \begin{lemma}\label{lemi} The minimal capital for the group ${\mathcal{K}}({\mathbf X})$ satisfies ${\mathcal{K}}({\mathbf X})\geq K$. \end{lemma} \begin{proof} We assume that ${\mathbf Y}\in {\mathbb A}^{\mathbf X}(m)$ is offsetting, that is, $Y_i-X_i\in \mathcal A$, or $\rho(Y_i-X_i)\leq 0$, for all $i\in\{1,...,N\}$. From this and the sub-additivity of $\rho$, we get: \begin{equation}\label{eqi} 0\geq \sum_{i=1}^N \rho(Y_i-X_i)\geq \rho\left(\sum_{i=1}^N (Y_i-X_i)\right)= \rho\(m-\sum_{i=1}^NX_i\)=K-m. \end{equation}
\end{proof} We now investigate what happens if the company holds a capital of $K$. Is it possible to split the capital $K$ into a vector of admissible payoffs ${\mathbf Y}$, so that the net worth $Y_i-X_i$ of each unit $i$ is acceptable? The existence of such payoffs is not granted. Below we show that this condition is rather restrictive and we characterise the situations where the answer to the question is positive.
A first remark is that the existence of offsetting payoffs with capital $K$ requires that no further improvement of the net worth of the business units can be reached by further diversification. This is what the next lemma says. \begin{lemma}\label{lemii} If the payoff vector ${\mathbf Y}\in {\mathbb A}^{\mathbf X}(K)$ is offsetting the liability ${\mathbf X}$, then: \begin{equation}\label{eqii} \sum_{i=1}^N\rho(Y_i-X_i)=\rho\(\sum_{i=1}^N (Y_i-X_i)\). \end{equation} \end{lemma}
\begin{proof} The proof is similar to the one of Lemma \ref{lemi}. It suffices to take $m=K$ in (\ref{eqi}), so that we must have only equalities. We remark that the admissibility of ${\mathbf Y}$ plays a role in establishing this result only through the condition $\sum_i Y_i=K$. \end{proof}
\begin{prop}\label{propequiv}Consider a payoff vector ${\mathbf Y}\in {\mathbb A}^{\mathbf X}(K)$. The following are equivalent: \begin{itemize} \item[(i)] The payoff vector ${\mathbf Y}$ is offsetting the liability ${\mathbf X}$. \item[(ii)] If ${\mathbb Q}^*\in\nabla \rho(-S^{\mathbf X})$, then for all $i\in\{1,...,N\}$: \begin{align}\label{uY-X} \rho(Y_i-X_i)& ={\mathbb E}_{{\mathbb Q}^*} [X_i-Y_i]=0. \end{align} \item[(iii)] Relation (\ref{eqii}) holds and for all $i\in\{1,...,N\}$: \begin{align}\label{uY-Xbis} {\mathbb E}_{{\mathbb Q}^*} [X_i-Y_i]=0 \end{align} for some ${\mathbb Q}^*\in\nabla \rho(-S^{\mathbf X})$.
\item[(iv)] The following hold: the minimal group capital ${\mathcal{K}}({\mathbf X})$ satisfies \begin{align*} K={\mathcal{K}}({\mathbf X}) \end{align*} and ${\mathbf Y}$ is a solution of $$ \inf_{\xi\in {\mathbb A}^{\mathbf X}(K)}\sum_{i=1}^N \rho(\xi_i-X_i). $$ \end{itemize} \end{prop} \begin{proof} (i)$\Rightarrow $(ii). We can apply Proposition \ref{proplin} below, taking $E=\{1,\ldots,N\}$ and $\xi_i:=Y_i-X_i$. Indeed, by Lemma \ref{lemii}, condition (1) in Proposition \ref{proplin} is fulfilled, hence condition (3) there gives $\rho(Y_i-X_i)={\mathbb E}_{{\mathbb Q}^*}[X_i-Y_i]$; since $\rho(Y_i-X_i)\leq 0$ for all $i$ and these quantities sum to ${\mathbb E}_{{\mathbb Q}^*}[S^{\mathbf X}]-K=0$, they must all vanish. \\ (ii)$\Rightarrow$(i). Obvious.\\ (ii)$\Rightarrow$(iii). The relation (\ref{uY-X}) implies that relation (\ref{eqii}) holds, as can easily be checked; also (\ref{uY-X}) implies (\ref{uY-Xbis}) obviously.\\ (iii)$\Rightarrow$(ii). If relation (\ref{eqii}) holds, we can apply Proposition \ref{proplin} below to deduce that $\rho(Y_i-X_i) ={\mathbb E}_{{\mathbb Q}^*} [X_i-Y_i]$, for all ${\mathbb Q}^*\in\nabla \rho(-S^{\mathbf X})$, and these expressions are null, again by (iii). \\ (i)$\Leftrightarrow$(iv). In general, $\sum_{i=1}^N \rho(\xi_i-X_i)\geq \rho\(\sum_{i=1}^N(\xi_i-X_i)\)$ so that $$ \inf_{\xi\in {\mathbb A}^{\mathbf X}(K)}\sum_{i=1}^N \rho(\xi_i-X_i)\geq \rho(K-S^{\mathbf X})=0. $$ The condition $K={\mathcal{K}}({\mathbf X})$ means that there are offsetting payoff vectors with a capital $K$, while from the above inequality we see that whenever there exist such offsetting payoff vectors, they are solving the minimisation problem (as $\rho(Y_i-X_i)=0$ whenever ${\mathbf Y}$ is offsetting ${\mathbf X}$). Hence the proof is complete. \end{proof}
\begin{prop}\label{proplin} We consider some random variables $(\xi_i)_{i\in E}$, with $E$ some countable set and let ${\mathbb Q}^E\in \nabla{\rho\(\sum_{i\in E} \xi_i\)}$. The following are equivalent: \begin{itemize}
\item[(1)] \begin{equation}\label{xilin0} \rho\left (\sum_{i\in E}\xi_i\right)= \sum_{i\in E}\rho(\xi_i). \end{equation} \item[(2)] For all $ \lambda_i\geq 0$, $i\in E$ \begin{align}\label{xilin} \rho\left (\sum_{i\in E} \lambda_i\xi_i\right)&=\sum_{i\in E}\lambda_i \rho\left (\xi_i\right). \end{align} \item[(3)] For all $i\in E$ \begin{align}\label{xilin2} \rho\left (\xi_i\right)&={\mathbb E}_{{\mathbb Q}^E}\left [-\xi_i\right]. \end{align} \end{itemize} \end{prop}
\begin{proof} We show $(1)\Rightarrow (3)$. For any $i\in E$, let ${\mathbb Q}^{\{i\}}\in {\mathcal{S}}$ be such that $\rho\(\xi_i\)={\mathbb E}_{{\mathbb Q}^{\{i\}}}\[-\xi_i\right].$ Then: $$ \sum_{i\in E}\rho(\xi_i)=\sum_{i\in E}{\mathbb E}_{{\mathbb Q}^{\{i\}}}\[-\xi_i\right]\geq \sum_{i\in E}{\mathbb E}_{{\mathbb Q}^E}\[-\xi_i\right]=\rho\(\sum_{i\in E} \xi_i\). $$ From (\ref{xilin0}), it follows that we must have only equalities above, hence: $$ \sum_{i\in E}\left ({\mathbb E}_{{\mathbb Q}^{\{i\}}}[-\xi_i]- {\mathbb E}_{{\mathbb Q}^E}[-\xi_i]\right )=0, $$ which implies (as all terms in the above sum are nonnegative) that $\rho\(\xi_i\)={\mathbb E}_{{\mathbb Q}^{\{i\}}}[-\xi_i] = {\mathbb E}_{{\mathbb Q}^E}[-\xi_i]$.
Now, we show $(3)\Rightarrow (2)$. Using the linearity of the expectation and (\ref{xilin2}) we obtain: \begin{equation}\label{eq1} \rho\left (\sum_{i\in E} \lambda_i\xi_i\right)\geq {\mathbb E}_{ {\mathbb Q}^E}\left [-\sum_{i\in E} \lambda_i\xi_i\right]=\sum_{i\in E} \lambda_i\rho(\xi_i). \end{equation} On the other hand, $\rho$ being subadditive and positively homogeneous, we also have for all $\lambda_i \geq 0$: \begin{equation}\label{eq2} \rho\left (\sum_{i\in E} \lambda_i\xi_i\right)\leq \sum_{i\in E} \lambda_i\rho(\xi_i). \end{equation} Combining (\ref{eq1}) and (\ref{eq2}) we get the equality (\ref{xilin}).
The implication $(2)\Rightarrow (1)$ is trivial, hence the proof is complete.
\end{proof}
Proposition \ref{propequiv} (ii) emphasizes that a payoff vector ${\mathbf Y}$ satisfying ${\mathbb E}_{{\mathbb Q}^*}[X_i-Y_i]=0$ with ${\mathbb Q}^*\in\nabla \rho(-S^{\mathbf X})$ is a potential candidate to be an offsetting payoff (necessary condition). It follows that when payoffs are standard, i.e., of the form (\ref{formYi}), the proportions $(\alpha_i)$ can be identified via these equalities. Their expressions are given below in Proposition \ref{offsettingStandardC}, together with another necessary and sufficient condition for these to be indeed offsetting vectors. Note that this time we will use the commonotonicity of the risk measure $\rho$ to obtain this condition, the previous results remaining true also when $\rho$ is not commonotonic.
\begin{prop}\label{offsettingStandardC} The minimal capital for the group ${\mathcal{K}}({\mathbf X})$ satisfies ${\mathcal{K}}({\mathbf X})= K$ if and only if \begin{equation}\label{cnsoff} \rho\(-\(K-S^{\mathbf X}\)^-\)=\sum_{i=1}^N\rho\(-\frac{X_i}{S^{\mathbf X}}(K-S^{\mathbf X})^-\). \end{equation} If this condition is satisfied, then ${\mathbf Y}$ is offsetting ${\mathbf X}$, where ${\mathbf Y}$ is a standard payoff (see (\ref{formYi})) with \begin{align*} \forall i:\quad\alpha_i:&=\frac{{\mathbb E}_{{\mathbb Q}^*}\[ \frac{X_i}{S^{\mathbf X}} (K-S^{\mathbf X})^-\right]}{{\mathbb E}_{{\mathbb Q}^*}\[ (K-S^{\mathbf X})^+\right]} =\frac{{\mathbb E}_{{\mathbb Q}^*}\[ \frac{X_i}{S^{\mathbf X}} (S^{\mathbf X}-K)^+\right]}{{\mathbb E}_{{\mathbb Q}^*}\[ (S^{\mathbf X}-K)^+\right]} \end{align*} where ${\mathbb Q}^*\in\nabla \rho(-S^{\mathbf X})$.
\end{prop} \begin{proof} By Proposition \ref{propequiv} (iii), $\widetilde {\mathbf Y}\in {\mathbb A}^{\mathbf X}(K)$ is offsetting ${\mathbf X}$ if and only if the condition (\ref{eqii}) is satisfied together with ${\mathbb E}_{{\mathbb Q}^*}[\widetilde Y_i-X_i]=0$ for all $i$.
Using the commonotonicity of $\rho$ we obtain that an equivalent expression for (\ref{eqii}) is (applied to $\widetilde {\mathbf Y}$): \begin{align*} & \[\rho\(\(\sum_{i=1}^N(\widetilde Y_i-X_i)\)^+\)-\sum_{i=1}^N\rho\((\widetilde Y_i-X_i)^+\)\right]\\
&+\[\rho\(-\(\sum_{i=1}^N (\widetilde Y_i-X_i)\)^-\)-\sum_{i=1}^N\rho\(-(\widetilde Y_i-X_i)^-\)\right]=0 \end{align*} and because of the subadditivity property of $\rho$ each of the two expressions in the brackets is smaller than or equal to 0. It follows that the equivalent expression of (\ref{eqii}) is: \begin{equation}\label{cns1} \rho\(\(\sum_{i=1}^N(\widetilde Y_i-X_i)\)^+\)=\sum_{i=1}^N\rho\((\widetilde Y_i-X_i)^+\) \end{equation} and \begin{equation}\label{cns2} \rho\(-\(\sum_{i=1}^N (\widetilde Y_i-X_i)\)^-\)=\sum_{i=1}^N\rho\(-(\widetilde Y_i-X_i)^-\). \end{equation} We notice that, because $\widetilde {\mathbf Y}$ is admissible, the expression (\ref{cns2}) equals (\ref{cnsoff}), so that the condition (\ref{cnsoff}) is necessary for the existence of offsetting payoffs. We now show that it is a sufficient condition. Indeed, we can always choose $\widetilde {\mathbf Y}={\mathbf Y}$ with ${\mathbf Y}$ standard and as stated in the proposition, and we have that the random variables $( Y_i-X_i)^+=\alpha_i (K-S^{\mathbf X})^+$, $\forall i=1,...,N$. We then use the commonotonicity property of $\rho$ to conclude that (\ref{cns1}) is verified. The particular proportions $\alpha_i$ are found through the equality ${\mathbb E}_{{\mathbb Q}^*}[Y_i-X_i]=0$ of Proposition \ref{propequiv} (ii). Hence, once (\ref{cnsoff}) is verified, the vector ${\mathbf Y}$ fulfils the necessary and sufficient conditions to be offsetting. \end{proof}
\section{A class of cohesive risk measures}
We consider a random variable $H\geq 0$ that satisfies $1\leq {\mathbb E}_{{\mathbb P}}[H]<\infty$ and introduce the risk measure: \begin{equation}\label{properu} \rho^H(\xi):=\sup_{{\mathbb Q}\in{\mathcal{S}}^H}{\mathbb E}_{{\mathbb Q}}[-\xi] \end{equation} with a scenario set: \begin{equation}\label{propers} {\mathcal{S}}^H:=\left \{{\mathbb Q}\mid 0\leq \frac{d{\mathbb Q}}{d{\mathbb P}}\leq H \quad {\mathbb P}\; a.s.\right \} \end{equation}
The main result in this section is that all cohesive risk measures that are commonotonic have this representation. A generalisation of these risk measures will follow afterwards. Before we prove this result, we give some alternative characterisations of $\rho^H$:
\begin{prop}\label{uH=AVar} \begin{itemize} \item[(1)]Let us denote ${\mathbb E}_{{\mathbb P}}[H]=h$ and introduce the probability measure ${\mathbb H} \ll {\mathbb P}$ as:
$$
\frac{d{\mathbb H}}{d{\mathbb P}}:=\frac{H}{h}.
$$ For any $\xi\in L^\infty$ we have:
\begin{align*} \rho^H(\xi)&=AV@R_{({\mathbb H},1/h)}(\xi), \end{align*}i.e., the average value at risk for $\xi$, at the level $1/h$ and under the probability ${\mathbb H}$. We recall $AV@R_{({\mathbb P},\lambda)}(\xi)$ is defined as: $$ AV@R_{({\mathbb P},\lambda)}(\xi)=\max_{{\mathbb Q}\in{\mathcal{S}}_\lambda}{\mathbb E}_{{\mathbb Q}}[-\xi], $$where ${\mathcal{S}}_\lambda$ is the set of all probability measures ${\mathbb Q}\ll{\mathbb P}$ whose density $d{\mathbb Q}/d{\mathbb P}$ is ${\mathbb P}$ {\frenchspacing a.s.}~ bounded by $1/\lambda$. \item[(2)]For any $\xi\in L^\infty$ we have:
\begin{equation*} \rho^H(\xi)={\mathbb E}_{{\mathbb Q}^\xi}[-\xi] \end{equation*} for a probability ${\mathbb Q}^\xi$ satisfying: \begin{equation}\label{Q0} \frac{d{\mathbb Q}^\xi}{d{\mathbb P}}= \begin{cases} H &\text{ on } \{\xi<q \}\\ cH &\text{ on } \{\xi=q \}\\ 0 &\text{ on } \{\xi > q\}, \end{cases} \end{equation} with $q=\inf\{x\;:\; {\mathbb E}_{\mathbb P}\[H{\rm \bf 1}_{\xi\leq x}\right]\geq 1\}$
and: \begin{equation*} c= \begin{cases} 0 &\text{ if }\text{ } {\mathbb E}_{\mathbb P}[H{\rm \bf 1}_{\xi=q}]=0\\ \frac{1-{\mathbb E}_{\mathbb P}[H{\rm \bf 1}_{\xi<q}]}{ {\mathbb E}_{\mathbb P}[H{\rm \bf 1}_{\xi=q}]} &\text{ otherwise. } \end{cases} \end{equation*}
\end{itemize} \end{prop}
\begin{proof} \begin{align*}
\rho^H(\xi)&=\sup \left\{-{\mathbb E}_{\mathbb P}[\varphi \xi]\;|\; 0\leq \varphi\leq H\;,\; {\mathbb E}_{\mathbb P}[\varphi]=1\right\} \\
&=\sup\left\{-{\mathbb E}_{{\mathbb H}}[\psi \xi ]\;|\; 0\leq \psi\leq h\;,\; {\mathbb E}_{{\mathbb H}}[\psi]=1\right\}\\ &=AV@R_{({\mathbb H},1/h)}(\xi). \end{align*} This proves (1). By applying the Neyman-Pearson lemma, one can show (2). For more details, see also the next section, where a generalisation appears, or Subsection 4.4 in \cite{FDbook}. \end{proof} Because $AV@R$ is commonotonic, it follows that \begin{cor}\label{corcomp} The risk measure $\rho^H$ is commonotonic. \end{cor}
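As a numerical cross-check of part (2) of the proposition, one may compare the Neyman--Pearson-type construction of ${\mathbb Q}^\xi$ with a direct linear-programming computation of the supremum. The following Python sketch does this on a finite state space; the data are hypothetical, \texttt{scipy} is assumed available, and the greedy helper is our own illustrative implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

p  = np.full(4, 0.25)
xi = np.array([-1.0, 0.2, 0.8, 1.5])
H  = np.array([1.5, 2.0, 0.5, 2.0])        # E_P[H] = 1.5 >= 1

def rho_H(xi, H, p):
    # put density H on the smallest values of xi until total mass 1
    phi, mass = np.zeros_like(H), 0.0
    for k in np.argsort(xi):
        phi[k] = min(H[k], (1.0 - mass) / p[k])
        mass += p[k] * phi[k]
        if mass >= 1.0 - 1e-12:
            break
    return p @ (phi * (-xi))

# direct check: maximise E_P[phi*(-xi)] over 0 <= phi <= H, E_P[phi] = 1
res = linprog(c=p * xi, A_eq=p[None, :], b_eq=[1.0],
              bounds=list(zip(np.zeros(4), H)))
assert abs(rho_H(xi, H, p) - (-res.fun)) < 1e-8
\end{verbatim}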
We are now ready to prove the main result of this section.
\begin{prop}\label{PropuH} A commonotonic risk measure $\rho$ satisfying the weak compactness property is cohesive if and only if it has the representation (\ref{properu}) for some random variable $H\in L^1({\mathbb P})$. \end{prop}
\begin{proof} We first prove that $\rho^H$ is cohesive. The risk measure $\rho^H$ is commonotonic (Corollary \ref{corcomp}) and clearly satisfies the weak compactness property. It is cohesive if for any given liability vector ${\mathbf X}$ offsetting payoffs exist. For ${\mathbf X}\in{\mathcal{X}}$ and $S^{\mathbf X}=\sum_iX_i$, let ${\mathbb Q}^*$ be such that $K=\rho^H(-S^{\mathbf X})={\mathbb E}_{{\mathbb Q}^*}[S^{\mathbf X}]$. It is sufficient to show that for arbitrary ${\mathbf X}\in{\mathcal{X}}$ and for ${\mathbf Y}$ as in Proposition \ref{offsettingStandardC}, the conditions $Y_i-X_i\in {\mathcal{A}}$, $\forall\;i\in\{1,...,N\}$ are satisfied. By construction the condition ${\mathbb E}_{{\mathbb Q}^*}(X_i-Y_i)=0$ is satisfied for all $i\geq 1$ and it remains to prove: \begin{equation}\label{uH} \rho^H(Y_i-X_i)={\mathbb E}_{{\mathbb Q}^*}(X_i-Y_i). \end{equation} We recall that ${\mathbb Q}^*$ is defined as in (\ref{Q0}) where $h={\mathbb E}_{\mathbb P}[H]$. With a slight change in notation, let $q$ be such that ${\mathbb H}[S^{\mathbf X}\ge q]\ge 1/h\ge {\mathbb H}[S^{\mathbf X} > q]$. Then ${\mathbb Q}^*[ \{S^{\mathbf X}<q\}]=0$ and ${{\mathbb Q}^*}[S^{\mathbf X}>K]={\mathbb E}_{{\mathbb P}}\[H{\rm \bf 1}_{\{S^{\mathbf X}>K\}}\right]$ (since obviously $K\ge q$).
For $i\geq 1$ and ${\mathbb Q}\in{\mathcal{S}}$ we have: \begin{align*} {\mathbb E}_{{\mathbb Q}}[X_i-Y_i]&=-\int_{S^{\mathbf X}<K} \alpha_i(K-S^{\mathbf X})d{\mathbb Q} +\int_{S^{\mathbf X}>K} \frac{X_i}{S^{\mathbf X}}(S^{\mathbf X}-K)\left(\frac{d{\mathbb Q}}{d{\mathbb P}}\right)d{\mathbb P} \\ &\leq \alpha_i\rho\((K-S^{\mathbf X})^+\) +\int_{S^{\mathbf X}>K} \frac{X_i}{S^{\mathbf X}}(S^{\mathbf X}-K)Hd{\mathbb P}\\ &= -\alpha_i{\mathbb E}_{{\mathbb Q}^*}[(K-S^{\mathbf X})^+] +\int_{S^{\mathbf X}>K} \frac{X_i}{S^{\mathbf X}}(S^{\mathbf X}-K)Hd{\mathbb P}\\ &= {\mathbb E}_{{\mathbb Q}^*}[X_i-Y_i] \end{align*} We have used the fact that $\rho\((K-S^{\mathbf X})^+\)=-{\mathbb E}_{{\mathbb Q}^*}[(K-S^{\mathbf X})^+]$, which is a consequence of the commonotonicity of $\rho^H$ (Corollary \ref{corcomp}). From the above inequality we obtain (\ref{uH}), for $i\in\{1,...,N\}$.
\noindent We now take $\rho$ cohesive, commonotonic, ${\mathcal{S}}$ weakly compact and prove that $\rho=\rho^H$ for some $H\in L^1$, that is, there is a random variable $H$ so that for any random variable $\xi\in L^\infty$, we have:
\begin{equation}\label{whattoprove} \rho(\xi)={\mathbb E}_{{\mathbb Q}^\xi}[-\xi] \end{equation} where ${\mathbb Q}^\xi$ is given in (\ref{Q0}).
Because $\rho$ is commonotonic, its values are determined by its values on indicator functions of sets. This brings us to the following reduction: find a random variable $0\le H\in L^1$ such that for all $B\in {\mathcal{F}}$ \begin{equation}\label{whattoprove1} \rho(-{\rm \bf 1}_B)={\mathbb E}_{\mathbb P}\[H{\rm \bf 1}_B\right]\wedge 1=\rho^H(-{\rm \bf 1}_B). \end{equation} The natural candidate is of course $H=\sup_{{\mathbb Q}\in{\mathcal{S}}}\frac{d{\mathbb Q}}{d{\mathbb P}}$ and we will show that this function indeed works. As a result we then get ${\mathcal{S}}={\mathcal{S}}^H$. The $\sup$ has to be understood in the measure-theoretic sense, as an essential supremum of the family of densities; the reader who is not familiar with this concept can find the information in a course on measure theory. However, the proof below gives sufficient information to overcome the difficulties.
Let us first take $A\in{\mathcal{F}}$ such that \begin{equation}\label{r1} \rho(-{\rm \bf 1}_A)\in(0,1). \end{equation} We consider a partition of $A$, $\tau(A)=\{A_1,A_2\}\subset {\mathcal{F}}$, the risks $X_1:={\rm \bf 1}_{A_1}$, $X_2:={\rm \bf 1}_{A_2}$ and their sum $S^{\mathbf X} =X_1+X_2={\rm \bf 1}_{A}$. We let $$ K:=\rho\(-\sum_i X_i\)=\rho\(-{\rm \bf 1}_A\), $$
and $K\in(0,1)$ due to (\ref{r1}). This in turn leads to $A=\{S^{\mathbf X}>K\}$ and $A^c=\{S^{\mathbf X}<K\}$. There is ${\mathbb Q}^A\in\nabla\rho (-{\rm \bf 1}_A)$ with $\rho(-{\rm \bf 1}_A)={\mathbb Q}^A[A]$. Hence $\rho({\rm \bf 1}_{A^c})=\rho(1-{\rm \bf 1}_A)=-1+\rho(-{\rm \bf 1}_A)=-{\mathbb Q}^A[A^c]$. As $\rho$ is cohesive, there exist offsetting payoffs for the risk ${\mathbf X}=(X_1,X_2,0,..,0)$. We now consider ${\mathbf Y}\in{\mathbb A}^{\mathbf X}(K)$ standard payoffs offsetting ${\mathbf X}$, with $Y_i=0$ for $i>2$. The residual risks of the units $i=1,2$ are given by $$ X_i-Y_i= -\alpha_iK{\rm \bf 1}_{A^c}+(1-K){\rm \bf 1}_{A_i}. $$ By commonotonicity of $\rho$ and of the random variables $(Y_i-X_i)^+$ and $-(Y_i-X_i)^-$ the following hold (for $i=1,2$): \begin{align*} \rho(Y_i-X_i)&=\rho\((Y_i-X_i)^+\) +\rho\(-(Y_i-X_i)^-\)\\ &= \alpha_iK \rho({\rm \bf 1}_{A^c})+(1-K)\rho(-{\rm \bf 1}_{A_i})\\ &= -\alpha_iK{\mathbb Q}^A[A^c] +(1-K)\rho(-{\rm \bf 1}_{A_i}) \end{align*} while by Proposition \ref{propequiv} we also know that for ${\mathbb Q}^A\in\nabla\rho (-{\rm \bf 1}_A)$: \begin{align}\label{eqQ1i} \rho(Y_i-X_i)=-\alpha_i K{\mathbb Q}^A(A^c)+(1-K){\mathbb Q}^A(A_i)=0. \end{align} Therefore (remember that $0<K<1$), $$ \rho\(-{\rm \bf 1}_{A_i}\)={\mathbb Q}^A(A_i),\;\forall A_i\in\tau(A). $$ We emphasize that the probability ${\mathbb Q}^A$ is chosen independently of the partition of the set $A$, $\tau(A)$. That means: if ${\mathbb Q}^A\in \nabla \rho\(-{\rm \bf 1}_A\)$ then: \begin{align}\label{qa} \rho(-{\rm \bf 1}_{B})&={\mathbb Q}^A(B),\;\forall B\subset A.
\end{align} From (\ref{qa}) we deduce that for all probability measures ${\mathbb Q}\in{\mathcal{S}}$ $$ \frac{d{\mathbb Q}^A}{d{\mathbb P}}{\rm \bf 1}_A\geq \frac{d{\mathbb Q}}{d{\mathbb P}}{\rm \bf 1}_A\quad {\mathbb P}\text{ a.s.}, $$ that is, $\frac{d{\mathbb Q}^A}{d{\mathbb P}}{\rm \bf 1}_A\ge H{\rm \bf 1}_A$, and because ${\mathbb Q}^A\in{\mathcal{S}}$ we must get equality. That means that if (\ref{r1}) holds we have $\frac{d{\mathbb Q}^A}{d{\mathbb P}}=H$ on the set $A$. In other words $$ \rho(-{\rm \bf 1}_A)={\mathbb E}[H{\rm \bf 1}_A]. $$ For general sets $B\in{\mathcal{F}}$, we distinguish between several cases. \begin{itemize} \item[(a)] If $B\in{\mathcal{F}}$ is such that $\rho(-{\rm \bf 1}_B)\in(0,1)$, then $\rho(-{\rm \bf 1}_B)={\mathbb E}[H{\rm \bf 1}_{B}]$, as was proved above. \item[(b)] If $B\in {\mathcal{F}}$ satisfies $\rho(-{\rm \bf 1}_B)=0$, then $B$ is a null set for all probabilities in ${\mathcal{S}}$, consequently $H{\rm \bf 1}_B=0\;{\mathbb P} \;{\frenchspacing a.s.}~$ and in a trivial way $\rho(-{\rm \bf 1}_B)={\mathbb E}_{\mathbb P}[H{\rm \bf 1}_B]$. \item[(c)] If $B\in {\mathcal{F}}$ satisfies $\rho(-{\rm \bf 1}_B)=1$ we will use the weak compactness and the property that the probability space is atomless. There is a nondecreasing family of sets $A_t$, $0\le t\le 1$, such that ${\mathbb P}[A_t]=t{\mathbb P}[B]$ and $A_1=B$. The Lebesgue property (weak compactness) then shows that the function $t\rightarrow \rho(-{\rm \bf 1}_{A_t})$ is continuous. It is nondecreasing, starts at $0$ and ends at $1$. Therefore there is a unique number $s\le 1$ such that for $t<s$, $\rho(-{\rm \bf 1}_{A_t})<1$ and for $t\ge s$, $\rho(-{\rm \bf 1}_{A_t})=1$. For $t<s$ we have $\rho(-{\rm \bf 1}_{A_t})={\mathbb E}[H{\rm \bf 1}_{A_t}]$ and by continuity we get $1=\rho(-{\rm \bf 1}_{A_s})={\mathbb E}[H{\rm \bf 1}_{A_s}]$. Since $A_s\subset B$ we have $1=\rho(-{\rm \bf 1}_B)={\mathbb E}[H{\rm \bf 1}_B]\wedge 1$. \end{itemize} We thus have verified (\ref{whattoprove1}). We must still show that $H\in L^1$. This is rather obvious since $\rho(-{\rm \bf 1}_{\{H>n\}})={\mathbb E}[H{\rm \bf 1}_{\{H>n\}}]\wedge 1$ and the left side tends to $0$ as $n\rightarrow \infty$. Hence eventually ${\mathbb E}[H{\rm \bf 1}_{\{H>n\}}]<1<\infty$ and $H\in L^1$. \end{proof} \begin{cor} Suppose that the risk measure $\rho$ is commonotonic, satisfies the weak compactness property, is cohesive and law determined (rearrangement invariant). Then $\rho$ is a tail expectation, i.e. there is a level $\alpha$ with $\rho=AV@R_{({\mathbb P},\alpha)}$. \end{cor} Indeed, if $\rho(\xi)$ is determined by the distribution of $\xi$, then the set ${\mathcal{S}}$ must be rearrangement invariant, i.e. if $f\in {\mathcal{S}}$ and $g$ has the same distribution as $f$, then also $g\in {\mathcal{S}}$. It is easy to see (for an atomless space) that this implies that the function $H$ constructed above must be a constant. This is nothing else than the characterisation of AV@R. \begin{rem} For a scenario set ${\mathcal{S}}$ we can introduce a space of random variables. We define $$
E=\{\xi\mid \text{ for all }{\mathbb Q}\in{\mathcal{S}}: {\mathbb E}_{\mathbb Q}[|\xi|]<\infty\}. $$
For elements $\xi\in E$ we have that also $\rho(-|\xi|)=\sup_{{\mathbb Q}\in{\mathcal{S}}}{\mathbb E}_{\mathbb Q}[|\xi|]<\infty$ and this expression defines a norm for which $E$ becomes a Banach space. Examples show that in general $L^\infty$ is not dense in this space. The risk measure $\rho$ has a natural extension to $E$. Indeed, we can define $\rho(\xi)= \sup_{{\mathbb Q}\in{\mathcal{S}}}{\mathbb E}_{\mathbb Q}[-\xi]$. In case $0<H\in L^1({\mathbb P})$, ${\mathbb E}[H]\ge 1$, the scenario set ${\mathcal{S}}^H$ also defines such a space, which in this case is easy to describe. As for the tail expectation, we have the following inequalities $$
\int |\xi| \, d{\mathbb H}\le \rho(-|\xi|)\le h\int |\xi| \, d{\mathbb H}\text{ where }h={\mathbb E}[H]\text{ and }d{\mathbb H}=\frac{H}{h}\,d{\mathbb P}. $$ It follows that the space $E$ is nothing else but the space $L^1({\mathbb H})$, with an equivalent norm. \end{rem} \section{Cohesion for fixed aggregated liability}
Above, we analysed group cohesion when the class of possible liability vectors is ${\mathcal{X}}$. We now suppose that the overall (or aggregated) group liability is fixed and equals $Z\in L^\infty$, so that the class of all possible liability vectors becomes: $\{{\mathbf X}\in{\mathcal{X}}\;|\; \sum_{i=1}^N X_i=Z\}$. We generalise the class of risk measures from the previous section as follows. We define: \begin{equation}\label{uLH} \rho^{L,H}(\xi) :=\sup_{{\mathbb Q}\in{\mathcal{S}}^{L,H}}{\mathbb E}_{{\mathbb Q}}[-\xi], \end{equation} with a scenario set: \begin{equation}\label{propersLH} {\mathcal{S}}^{L,H}:=\left \{{\mathbb Q}: L\leq \frac{d{\mathbb Q}}{d{\mathbb P}}\leq H \quad {\mathbb P}\; a.s.\right \}, \end{equation} and where $L,H$ are nonnegative random variables that satisfy $0\leq L\leq H$, with ${\mathbb E}[H]<\infty$ and ${\mathbb E}[L]>0$. From the calculations below it will turn out that this risk measure is also commonotonic. We shall denote $\ell:={\mathbb E}[L]$ and $h:={\mathbb E}[H]$. We suppose $\ell<1$ and $h>1$ in order to exclude the trivial case ${\mathcal{S}}^{L,H}=\{{\mathbb P}\}$. We introduce the following probability measures: \begin{equation} \frac{d{\mathbb H}}{d{\mathbb P}}:=\frac{H-L}{h-\ell}\text{ and } \frac{d{\mathbb L}}{d{\mathbb P}}:=\frac{L}{\ell}{\rm \bf 1}_{\{\ell>0\}}+{\rm \bf 1}_{\{\ell=0\}}. \end{equation}
For a given random variable $\xi$, we define a probability measure ${\mathbb Q}^\xi$ as follows: \begin{equation}\label{Qxi} \frac{d{\mathbb Q}^\xi}{d{\mathbb P}}:= \begin{cases} H &\text{ on } \{\xi < q(\xi) \}\\ c(\xi) H+(1-c(\xi))L &\text{ on } \{\xi =q(\xi) \}\\ L &\text{ on } \{\xi >q(\xi)\} \end{cases} \end{equation} with $q(\xi)$ and $c(\xi)$ being constants (derived from the distribution of $\xi$) that ensure that ${\mathbb E}_{{\mathbb P}}\[\frac{d{\mathbb Q}^\xi}{d{\mathbb P}}\right]=1$. These constants can be computed as follows. Let us denote: $$ F(\xi,x):={\mathbb E}_{{\mathbb P}}\[H{\rm \bf 1}_{\xi\leq x}+L{\rm \bf 1}_{\xi> x}\right]=\ell+{\mathbb E}_{\mathbb P}\[(H-L){\rm \bf 1}_{\xi\leq x}\right]. $$ The function $F(\xi,\cdot)$ is nondecreasing, right continuous, and satisfies $\lim_{x\to-\infty} F(\xi,x)= \ell\geq 0$ and $\lim_{x\to\infty} F(\xi,x)= h >1$. We denote: \begin{equation}\label{qxi} q(\xi):=\inf\{x: F(\xi,x)\geq 1\} \end{equation} and define $c(\xi)$ as satisfying $c(\xi) F(\xi,q(\xi))+(1-c(\xi))F(\xi,q(\xi)-)=1$, that is: \begin{equation}\label{cxi} c(\xi)= \begin{cases} 0 & \text{if $F(\xi,\cdot)$ is continuous at $q(\xi)$} \\ \frac{1-F(\xi,q(\xi)-)}{F(\xi,q(\xi))-F(\xi,q(\xi)-)}&\text{otherwise.} \end{cases} \end{equation} We observe that indeed ${\mathbb E}_{{\mathbb P}}\[\frac{d{\mathbb Q}^\xi}{d{\mathbb P}}\right]=c(\xi) F(\xi,q(\xi))+(1-c(\xi))F(\xi,q(\xi)-) =1$, as required for ${\mathbb Q}^\xi$ to be a probability measure.
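The same greedy logic as for $\rho^H$ produces the maximiser (\ref{Qxi}) numerically: start from the floor $L$ and raise the density to $H$ on the worst outcomes of $\xi$ until the total mass reaches one. A minimal Python sketch with hypothetical data (the helper name is ours):
\begin{verbatim}
import numpy as np

def rho_LH(xi, L, H, p):
    phi = L.astype(float).copy()           # start from the floor L
    budget = 1.0 - p @ L                   # extra mass (positive since ell < 1)
    for k in np.argsort(xi):               # worst outcomes of xi first
        add = min(H[k] - L[k], budget / p[k])
        phi[k] += add
        budget -= p[k] * add
        if budget <= 1e-12:
            break
    assert abs(p @ phi - 1.0) < 1e-9       # Q^xi is a probability
    return p @ (phi * (-xi))

p  = np.full(4, 0.25)
xi = np.array([-1.0, 0.2, 0.8, 1.5])
L  = np.array([0.2, 0.4, 0.2, 0.2])        # E_P[L] = 0.25 < 1
H  = np.array([1.5, 2.0, 0.5, 2.0])        # E_P[H] = 1.5  > 1
print(rho_LH(xi, L, H, p))
\end{verbatim}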
\begin{prop}
For any $\xi\in L^\infty$ we have the following alternative representations for $\rho^{L,H}$:
\begin{align*} \rho^{L,H}(\xi)&=\ell\, {\mathbb E}_{\mathbb L}[-\xi]+(1-\ell) AV@R_{({\mathbb H},\gamma)}(\xi), \end{align*} where $\gamma=(1-\ell)/(h-\ell)<1$, and $$ \rho^{L,H}(\xi)= {\mathbb E}_{{\mathbb Q}^\xi}[-\xi], $$ where ${\mathbb Q}^\xi$ is defined in (\ref{Qxi}).
\end{prop}
\begin{proof}We have by definition: \begin{equation}
\rho^{L,H}(\xi)=\sup \left\{-{\mathbb E}_{\mathbb P}[\varphi \xi ]\;|\; L\leq \varphi\leq H\;,\; {\mathbb E}_{\mathbb P}[\varphi]=1\right\} \end{equation}
Using the transformation $\psi:=\frac{(\varphi -L)(h-\ell)}{(H-L)(1-\ell)}$ we obtain that $\{ L\leq \varphi\leq H\;,\; {\mathbb E}_{\mathbb P}[\varphi]=1\}= \{ 0\leq \psi\leq \frac{h-\ell}{1-\ell}\;,\; {\mathbb E}_{{\mathbb H}}[\psi]=1\} $ and:
\begin{align*} {\mathbb E}_{\mathbb P}[\varphi \xi ]&={\mathbb E}_{\mathbb P}[L \xi ]+{\mathbb E}_{\mathbb P}[(\varphi-L) \xi ]\\ &=\ell\,{\mathbb E}_{\mathbb P}\[ \xi\frac{d{\mathbb L}}{d{\mathbb P}} \right]+(1-\ell){\mathbb E}_{{\mathbb P}}\[\psi \xi \frac{d{\mathbb H}}{d{\mathbb P}}\right]\\ &=\ell \,{\mathbb E}_{\mathbb L} [\xi ]+(1-\ell){\mathbb E}_{{\mathbb H}}[\psi \xi ]. \end{align*}
Therefore, an equivalent expression for $\rho^{L,H}$ is: \begin{align*} \rho^{L,H}(\xi)
&=\ell \,{\mathbb E}_{\mathbb L}[-\xi]+ (1-\ell) \sup \left \{-{\mathbb E}_{{\mathbb H}}[\varphi \xi ]\;|\; 0\leq \varphi\leq \frac{h-\ell}{1-\ell}\;,\; {\mathbb E}_{{\mathbb H}}[\varphi ]=1 \right \}\\ &=\ell \,{\mathbb E}_{\mathbb L}[-\xi]+(1-\ell) AV@R_{({\mathbb H},\gamma)}(\xi), \end{align*} where $AV@R_{({\mathbb H},\gamma)}(\xi)$ is the average value at risk for $\xi$, at the level $\gamma=(1-\ell)/(h-\ell)$ and under the probability ${\mathbb H}$. We notice that $\gamma<1$ as $h>1$. The optimiser probability for this $AV@R$ is known to be $\widetilde{\mathbb Q}^\xi$: $$ \frac{d\widetilde{\mathbb Q}^\xi}{d{\mathbb H}} =\frac{1}{\gamma}\({\rm \bf 1}_{\xi<q}+c{\rm \bf 1}_{\xi=q}\), $$ with $q$ a $\gamma$-quantile of $\xi$ under ${\mathbb H}$ and $c=0$ if ${\mathbb H}(\xi=q)=0$ and $c=\(\gamma-{\mathbb H}(\xi<q)\)/{\mathbb H}(\xi=q)$ otherwise. Writing the optimal density as $L+(1-\ell)\frac{d\widetilde{\mathbb Q}^\xi}{d{\mathbb H}}\,\frac{d{\mathbb H}}{d{\mathbb P}}$ we obtain the expression (\ref{Qxi}) with the constants $q=q(\xi)$ and $c= c(\xi)$.
\end{proof} \begin{prop} Suppose the regulator's risk measure is $\rho= \rho^{L,H}$. Let $Z\in L^\infty_+$ with $q(-Z)$ the corresponding constant, as defined in (\ref{qxi}). Suppose further that $$ \rho^{L,H}(-Z)\geq -q(-Z) $$ (for this inequality to hold it is sufficient that $AV@R_{({\mathbb H},\gamma)}(-Z)\leq {\mathbb E}_{\mathbb L}[Z]$). Then, for all risk vectors ${\mathbf X}$ satisfying $\sum_{i=1}^N X_i=Z$, the following equality is satisfied $$ {\mathcal{K}}({\mathbf X})=\rho(-Z). $$ \end{prop} \begin{proof} Let us denote $K=\rho^{L,H}(-Z)$ and let ${\mathbb Q}^*$ be the probability in (\ref{Qxi}) with $\xi=-Z$. If $K\geq -q(-Z)$, then $\{Z>K\}\subset\{Z> -q(-Z)\}=\{-Z< q(-Z)\}=\{\frac{d{\mathbb Q}^*}{d{\mathbb P}}=H\}$.
For any ${\mathbf X}\in{\mathcal{X}}$ satisfying $\sum_{i=1}^N X_i=Z$ and for any corresponding vector ${\mathbf Y}\in{\mathbb A}^{\mathbf X}(K)$, we have: $\forall i$ $\{X_i>Y_i\}\subset\{Z>K\}$. Therefore, we obtain: $$ \rho(-(Y_i-X_i)^-)= {\mathbb E}_{{\mathbb Q}^*}\[-(Y_i-X_i)^-\right]={\mathbb E}_{\mathbb P}\[-(Y_i-X_i)^-H\right]. $$ It follows that the condition (\ref{cnsoff}) is satisfied and, as $\rho^{L,H}$ is commonotonic, the equality ${\mathcal{K}}({\mathbf X})=\rho(-Z)$ holds as an application of Proposition \ref{offsettingStandardC}.
It remains to prove the claim that if $AV@R_{({\mathbb H},\gamma)}(-Z)\leq {\mathbb E}_{\mathbb L}[Z]$ then $\rho^{L,H}(-Z)\geq -q(-Z)$. We have that $-q(-Z)\leq AV@R_{{\mathbb H},\gamma}(-Z)$ (this is always true). Then, the condition\\ $AV@R_{({\mathbb H},\gamma)}(-Z)\leq {\mathbb E}_{\mathbb L}[Z]$ implies $AV@R_{({\mathbb H},\gamma)}(-Z) \leq \rho(-Z)$, and hence the claim is proved.
\end{proof}
\end{document}
\begin{document}
\title[]{A New Trigonometric Form of Hamilton's Quaternions} \author{Mijail Andr\'es Saralain Figueredo} \address{Facultad de Matem\'atica, F\'isica y Computaci\'on, Universidad Central Marta Abreu de Las Villas, Apartado Postal 54830, Santa Clara, Villa Clara, Cuba} \email{[email protected]} \thanks{[email protected]} \date{\today}
\begin{abstract} Is it possible to define, for certain values of $n$, a product of vectors of the real vector space of $n$ dimensions, such that this space becomes, with respect to this multiplication and the ordinary addition of vectors, a numerical system containing the system of real numbers? It can be proven that in general this cannot be done. In the space of four dimensions the construction is possible if we give up the commutativity of multiplication. The resulting system is that of \textbf{QUATERNIONS}. In this work I first review the fundamental concepts of Hamilton's hypercomplex numbers and then develop these concepts in depth. \end{abstract}
\maketitle
\section{Definition of the quaternions}
\begin{defn} We shall call quaternions, or simply Hamilton's hypercomplex numbers, the expressions of the form: \begin{equation*} Q=a+bi+cj+dk \end{equation*} where $a,b,c,d\in \mathbb{R}$. Moreover, $i,j,k$ are imaginary units, each a solution of the equation $x^{2}=-1$, satisfying: \begin{equation*} ij=k=-ji \end{equation*} \begin{equation*} jk=i=-kj \end{equation*} \begin{equation*} ki=j=-ik \end{equation*} \begin{equation*} ijk=-1 \end{equation*} \begin{equation*} i^{2}=j^{2}=k^{2}=-1. \end{equation*} \end{defn}
\begin{defn} We shall say that a quaternion is purely imaginary if the real part of the expression is equal to zero (that is, $a=0$; we write $Q\in \mathbb{I}m(\mathbb{H})$). \end{defn}
\begin{thm} We say $Q=Q^{\prime }$, with $Q^{\prime }=a^{\prime }+b^{\prime }i+c^{\prime }j+d^{\prime }k$ $(Q,Q^{\prime }\in \mathbb{H})$, i.e., two quaternions are equal, if and only if the components of their real and imaginary parts coincide: $a=a^{\prime },b=b^{\prime },c=c^{\prime },d=d^{\prime }.$ \end{thm}
Proof. It is a direct consequence of the equality in $\mathbb{R}^{4}.$
\subsection{Fundamental Definition}
\begin{itemize} \item The sum and subtraction are defined component by component, i.e.: \begin{equation} Q+Q^{\prime }=(a+a^{\prime })+(b+b^{\prime })i+(c+c^{\prime })j+(d+d^{\prime })k \end{equation} \begin{equation} Q-Q^{\prime }=(a-a^{\prime })+(b-b^{\prime })i+(c-c^{\prime })j+(d-d^{\prime })k. \end{equation} \end{itemize}
\begin{itemize} \item The product is defined in the way: \begin{equation} Q^{\prime \prime }=(a+bi+cj+dk)(a^{\prime }+b^{\prime }i+c^{\prime }j+d^{\prime }k) \end{equation} resulting:
$Q^{\prime\prime}=a a^{\prime}-b b^{\prime}- c c^{\prime}- d d^{\prime}+(a b^{\prime}+a^{\prime}b + c d^{\prime}- c^{\prime}d) i + (a c^{\prime}+ a^{\prime}c -b d^{\prime}+ b^{\prime}d) j + (a d^{\prime}+ a^{\prime}d + b c^{\prime}- b^{\prime}c) k$, with: \begin{equation*} a^{\prime\prime}= a a^{\prime}- b b^{\prime}- c c^{\prime}- d d^{\prime} \end{equation*} \begin{equation*} b^{\prime\prime}= a b^{\prime}+ a^{\prime}b + c d^{\prime}- c^{\prime}d \end{equation*} \begin{equation*} c^{\prime\prime}= a c^{\prime}+ a^{\prime}c - b d^{\prime}+ b^{\prime}d \end{equation*} \begin{equation*} d^{\prime\prime}= a d^{\prime}+ a^{\prime}d + b c^{\prime}-b^{\prime}c \end{equation*} \begin{eqnarray} Q^{\prime\prime}= a^{\prime\prime}+ b^{\prime\prime}i + c^{\prime\prime}j + d^{\prime\prime}k. \end{eqnarray} \end{itemize}
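To make the product formula easy to experiment with, here is a minimal Python sketch (not part of the formal development); the tuple $(a,b,c,d)$ encoding and the helper name \texttt{qmul} are illustrative choices, not standard notation. It checks the multiplication table of the imaginary units.
\begin{verbatim}
def qmul(q1, q2):
    # componentwise product, following (1.3)-(1.4)
    a, b, c, d = q1
    ap, bp, cp, dp = q2
    return (a*ap - b*bp - c*cp - d*dp,
            a*bp + ap*b + c*dp - cp*d,
            a*cp + ap*c - b*dp + bp*d,
            a*dp + ap*d + b*cp - bp*c)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert qmul(i, j) == k and qmul(j, i) == (0, 0, 0, -1)   # ij = k = -ji
assert qmul(j, k) == i and qmul(k, i) == j               # jk = i, ki = j
assert qmul(i, i) == qmul(j, j) == qmul(k, k) == (-1, 0, 0, 0)
\end{verbatim}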
\subsection{Fundamental Properties}
\begin{itemize} \item Commutativity of the sum: $Q_{1}+Q_{2}=Q_{2}+Q_{1}$.
\item Associativity of the sum: $(Q_{1}+Q_{2})+Q_{3}=Q_{1}+(Q_{2}+Q_{3})$.
\item Associativity of the product: $(Q_{1}\cdot Q_{2})\cdot Q_{3}=Q_{1}\cdot (Q_{2}\cdot Q_{3})$.
\item Distributivity: $(Q_{1}+Q_{2})\cdot Q_{3}=Q_{1}\cdot Q_{3}+Q_{2}\cdot Q_{3}$. \end{itemize}
\begin{rem} Among the above properties the following one is missing: \begin{equation} Q_{1}Q_{2}\neq Q_{2}Q_{1}\quad \text{(in general)}. \end{equation} A very important consequence is that Hamilton's hypercomplex numbers do not form a commutative field. \end{rem}
\begin{defn} We shall call conjugate of a quaternion $Q$, and denote $\overline{Q}$, the number: \begin{equation} \overline{Q}=a-bi-cj-dk. \end{equation} \end{defn}
Let us write now the sum and the difference of a quaternion with its conjugate:
\begin{itemize} \item Sum: \begin{equation*} Q+\overline{Q}=(a+bi+cj+dk)+(a-bi-cj-dk)=2a=2\,\mathbb{R}e(Q). \end{equation*}
\item Difference: \begin{equation*} Q-\overline{Q}=(a+bi+cj+dk)-(a-bi-cj-dk)=2(bi+cj+dk)=2\,\mathbb{I}m(Q). \end{equation*} \end{itemize}
\begin{itemize} \item Product by its conjugate: \begin{equation} Q\cdot \overline{Q}=(a+bi+cj+dk)(a-bi-cj-dk)=a^{2}+b^{2}+c^{2}+d^{2}. \end{equation} \end{itemize}
\subsubsection{Other properties}
\begin{itemize} \item Involutivity: \begin{equation*} \overline{\overline{Q}}=Q \end{equation*}
\item Additivity: \begin{equation*} \overline{Q_{1}+Q_{2}}=\overline{Q_{1}}+\overline{Q_{2}} \end{equation*}
\item Multiplicativity: \begin{equation*} \overline{Q_{1}\cdot Q_{2}}=\overline{Q_{2}}\cdot \overline{Q_{1}} \end{equation*}
\item Divisibility: \begin{equation*} \overline{\left(Q_{1}\cdot Q_{2}^{-1}\right)}=\overline{Q_{2}}^{\,-1}\cdot \overline{Q_{1}}. \end{equation*} \end{itemize}
Is it possible to establish the inverse for the sum and the multiplication?
\begin{defn} From $Q_{1}+Q_{2}=0$ we have: $a+a^{\prime }=0$, $b+b^{\prime }=0$, $c+c^{\prime }=0$, $d+d^{\prime }=0$. This implies $a^{\prime }=-a$, $b^{\prime }=-b$, $c^{\prime }=-c$, $d^{\prime }=-d$. Therefore $Q_{2}=-a-bi-cj-dk$, i.e., $Q_{2}=-Q_{1}.$ \end{defn}
Analogously,
\begin{defn} If $Q_{1}\cdot Q_{2}=1$ with $Q_{1}\neq 0$ then $Q_{2}$ will be the multiplicative inverse for $Q_{1}$. \begin{equation} Q_{2}=\frac{a-bi-cj-dk}{a^{2}+b^{2}+c^{2}+d^{2}} \end{equation} with $Q_{1}=a+bi+cj+dk$. \end{defn}
Now, we've already defined the conjugate and multiplicative inverse. Let's define division.
\begin{defn} Division is defined as follows: \begin{equation} \frac{Q_{1}}{Q_{2}}=Q_{1}\cdot Q_{2}^{-1}. \end{equation} \end{defn}
\begin{rem} Notice that when dividing we multiply the numerator by the multiplicative inverse of the denominator, and the order of the factors matters (recall (1.5)). \end{rem}
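Continuing the earlier Python sketch (the helpers \texttt{qinv} and \texttt{qdiv} are again illustrative names, and \texttt{qmul} is repeated for self-containment), the following checks the inverse and shows that left and right quotients differ:
\begin{verbatim}
def qmul(q1, q2):  # product (1.3)-(1.4), repeated for self-containment
    a, b, c, d = q1; ap, bp, cp, dp = q2
    return (a*ap - b*bp - c*cp - d*dp, a*bp + ap*b + c*dp - cp*d,
            a*cp + ap*c - b*dp + bp*d, a*dp + ap*d + b*cp - bp*c)

def qinv(q):
    n2 = sum(x*x for x in q)               # a^2 + b^2 + c^2 + d^2
    a, b, c, d = q
    return (a/n2, -b/n2, -c/n2, -d/n2)

def qdiv(q1, q2):
    return qmul(q1, qinv(q2))              # right division Q1 * Q2^{-1}

q1, q2 = (1, 2, 3, 4), (2, 0, 1, 0)
res = qmul(qdiv(q1, q2), q2)               # (Q1 * Q2^{-1}) * Q2 = Q1
assert all(abs(x - y) < 1e-12 for x, y in zip(res, q1))
assert qdiv(q1, q2) != qmul(qinv(q2), q1)  # left and right quotients differ
\end{verbatim}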
\section{Absolute Value}
\begin{defn}
We shall call absolute value or modulus of the quaternion, the nonnegative real number $\left| Q\right| $, \begin{equation*}
|Q|=|a+bi+cj+dk|=\sqrt{a^{2}+b^{2}+c^{2}+d^{2}}. \end{equation*} \end{defn}
Evidently, if we want to find the modulus of any quaternion $Q+Q^{\prime },Q^{\prime \prime },Q-Q^{\prime }$, etc., this will be the square root of the sum of the squares of the real coefficients of the units. Let us notice that: \begin{equation*}
|Q|^{2}=Q\cdot \overline{Q}, \end{equation*} as seen in formula $(1.7)$.
Thus, $|Q|^{2}=a^{2}+b^{2}+c^{2}+d^{2}$.
Now with $Q=a+bi+cj+dk$ and $\overline{Q}=a-bi-cj-dk$ (finding the conjugate of the earlier):
\begin{itemize}
\item $|\overline{Q}|=|Q|$
\item $|Q_{1}\cdot Q_{2}|=|Q_{1}||Q_{2}|$
\item ${|Q_{1}Q_{2}|}^{2}={|Q_{1}|}^{2}\cdot {|Q_{2}|}^{2}.$ \end{itemize}
\subsection{Norm}
\begin{defn} The norm will be defined like this: \begin{equation}
\|Q\|^{2}=Q\cdot \overline{Q}=|Q|^{2} \end{equation} \end{defn}
Now with $Q=a+bi+cj+dk$ and $\overline{Q}=a-bi-cj-dk$ (finding the conjugate of the earlier):
\begin{itemize}
\item $\|\overline{Q}\|=\|Q\|$
\item $\|Q_{1}\cdot Q_{2}\|=\|Q_{1}\|\,\|Q_{2}\|$
\item ${\|Q_{1}Q_{2}\|}^{2}={\|Q_{1}\|}^{2}\cdot {\|Q_{2}\|}^{2}.$ \end{itemize}
These are simple consequences of the corresponding properties of the modulus.
\begin{rem} We defined division previously. Now we state the following equality: \begin{equation*}
\frac{Q_{1}}{Q_{2}}=\frac{Q_{1}\cdot \overline{Q_{2}}}{\|Q_{2}\|^{2}} \end{equation*} \end{rem}
\begin{defn} We shall call unit quaternion the Hamilton's hypercomplex which satisfies: \begin{equation}
\|Q\|=1 \end{equation} \end{defn}
i.e. $a^{2}+b^{2}+c^{2}+d^{2}=1$.
\section{Several ways of defining a quaternion}
\subsection{Vector Form}
\begin{center} $Y: \mathbb{H} \longrightarrow {\mathbb{R}}^{4}$
$a + b i + c j + d k \longrightarrow (a, b, c, d);a, b, c, d \in\mathbb{R}$ \end{center}
Let us prove that $Y$ is a bijective map.

$\forall x_{1},x_{2}\in \mathbb{H}$ with $x_{1}\neq x_{2}\Rightarrow Y(x_{1})\neq Y(x_{2})$.

- Therefore $Y$ is injective.

$\forall y \in \mathbb{R}^{4}\ \exists x \in \mathbb{H}:Y(x)=y$.

- Therefore $Y$ is surjective.

We have proved that $Y$ is a bijective function.
Provided that $(\mathbb{H},+)$ is an Abelian group, with the sum as defined earlier, that the product is distributive with respect to the sum, $Q_{1}(Q_{2}+Q_{3})=Q_{1}\cdot Q_{2}+Q_{1}\cdot Q_{3}$, and that the product is associative, $Q_{1}(Q_{2}Q_{3})=(Q_{1}Q_{2})Q_{3}$, the quaternions with the operations of sum and product define a ring.
$(\mathbb{H},+,\ast )$ is a ring.
\subsubsection{Definitions of this notation}
\begin{enumerate} \item The quaternion$(0,0,0,0)$ is the neutral quaternion for the sum. It's obvious from the definition of sum: $(a,b,c,d)+(0,0,0,0)=(a,b,c,d)$.
\item Analogously, the neutral element for the product is $(1,0,0,0)$, because $(a,b,c,d)(1,0,0,0)=(a,b,c,d).$
\item Noting that we are using our defined product, scalar multiplication reads: $\alpha (a,b,c,d)=(\alpha a,\alpha b,\alpha c,\alpha d).$
\item $(0,1,0,0)(0,1,0,0)=(-1,0,0,0)$; as we can see, this is the identity $i^{2}=-1$ in the new notation. \end{enumerate}
From now on, we'll consider: $Y(1)=(1,0,0,0)$ $Y(i)=(0,1,0,0)$ $ Y(j)=(0,0,1,0)$ $Y(k)=(0,0,0,1)$.
We wonder: is it an isomorphism? We only have to prove that the following holds:
$Y [( a + b i + c j + d k)+( s + r i + t j + h k)] = Y [a + b i + c j + d k]+Y [ s + r i + tj + h k]$.
\subsection{Other Vector Form}
\begin{center} $\Gamma: \mathbb{H} \longrightarrow {\mathbb{R}}^{4}$
$a + b i + c j + d k \longrightarrow (a,\overrightarrow{v});\ a\in\mathbb{R},\ \overrightarrow{v}$ a vector in $\mathbb{R}^{3}$ \end{center}
The conjugate in this notation is: $\overline{(a,\overrightarrow{v})}=(a,- \overrightarrow{v})$. I remark that the multiplication in this notation is: $Q_{1}\cdot Q_{2}=(a,\overrightarrow{v})(a^{\prime},\overrightarrow{v_{1}})=(aa^{\prime}-\overrightarrow{v}\cdot \overrightarrow{v_{1}},\,a\overrightarrow{v_{1}}+a^{\prime}\overrightarrow{v}+ \overrightarrow{v}\times \overrightarrow{v_{1}})$
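The scalar--vector product can be checked against the componentwise formula; here is a small Python sketch using \texttt{numpy} (assumed available; the helper \texttt{qmul} is repeated for self-containment):
\begin{verbatim}
import numpy as np

def qmul(q1, q2):  # componentwise product (1.3)-(1.4)
    a, b, c, d = q1; ap, bp, cp, dp = q2
    return (a*ap - b*bp - c*cp - d*dp, a*bp + ap*b + c*dp - cp*d,
            a*cp + ap*c - b*dp + bp*d, a*dp + ap*d + b*cp - bp*c)

def qmul_sv(q1, q2):
    # (a, v)(a', v') = (a a' - v.v', a v' + a' v + v x v')
    a, v = q1
    ap, vp = q2
    return a*ap - np.dot(v, vp), a*vp + ap*v + np.cross(v, vp)

a, v = qmul_sv((1.0, np.array([2.0, 3.0, 4.0])),
               (0.5, np.array([1.0, 0.0, 2.0])))
ref = qmul((1, 2, 3, 4), (0.5, 1, 0, 2))
assert abs(a - ref[0]) < 1e-12 and np.allclose(v, ref[1:])
\end{verbatim}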
\subsection{Matrix form}
The matrix form of defining a quaternion is:
\begin{center} $\Omega : \mathbb{H} \longrightarrow \mathbb{M}$
$a + b i + c j + d k \longrightarrow \left( \begin{array}{cc} a+bi & c+di \\ -c+di & a-bi \end{array} \right)$ \end{center}
In order to illustrate this notation, it is convenient to develop the following, by taking \begin{equation*} \Omega (1)=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) ,\Omega (i)=\left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array} \right) ,\Omega (j)=\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) ,\Omega (k)=\left( \begin{array}{cc} 0 & i \\ i & 0 \end{array} \right) . \end{equation*} These are called Pauli's matrices.
The quaternion should then be: \begin{equation*} \Omega (a+bi+cj+dk)=\left( \begin{array}{cc} a+bi & c+di \\ -c+di & a-bi \end{array} \right) \end{equation*} \begin{equation*} \Omega (a+bi+cj+dk)=a\cdot \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) +b\cdot \left( \begin{array}{cc} i & 0 \\ 0 & -i \end{array} \right) +c\cdot \left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right) +d\cdot \left( \begin{array}{cc} 0 & i \\ i & 0 \end{array} \right) \end{equation*} \begin{equation*} \Omega (a+bi+cj+dk)=a \Omega (1)+b \Omega (i)+c \Omega (j)+d \Omega (k). \end{equation*} It only remains to ask: is this function an isomorphism? It is, indeed, and the proof is analogous to that for the earlier function $(Y)$. If we compute the determinant of the matrix $\left( \begin{array}{cc} a+bi & c+di \\ -c+di & a-bi \end{array} \right) $ and the squared modulus of the quaternion $a+bi+cj+dk$, we see identical results, which makes the compatibility of $\Omega$ with the modulus easy to check.
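A quick numerical confirmation that $\Omega$ is multiplicative and that the determinant equals the squared modulus, in a minimal \texttt{numpy} sketch (helper names are ours; the data are hypothetical):
\begin{verbatim}
import numpy as np

def qmul(q1, q2):  # componentwise product (1.3)-(1.4)
    a, b, c, d = q1; ap, bp, cp, dp = q2
    return (a*ap - b*bp - c*cp - d*dp, a*bp + ap*b + c*dp - cp*d,
            a*cp + ap*c - b*dp + bp*d, a*dp + ap*d + b*cp - bp*c)

def omega(q):
    a, b, c, d = q
    return np.array([[a + b*1j,  c + d*1j],
                     [-c + d*1j, a - b*1j]])

q1, q2 = (1, 2, 3, 4), (0.5, 1, 0, 2)
# Omega is multiplicative: Omega(Q1 Q2) = Omega(Q1) Omega(Q2)
assert np.allclose(omega(qmul(q1, q2)), omega(q1) @ omega(q2))
# det(Omega(Q)) = a^2 + b^2 + c^2 + d^2 = |Q|^2
assert np.isclose(np.linalg.det(omega(q1)), sum(x*x for x in q1))
\end{verbatim}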
\subsection{Trigonometric form}
\begin{center} $\Pi :\mathbb{H}\longrightarrow \mathbb{T}$ ,$\mathbb{T}:$Trigonometric form.
$a + b i + c j + d k \longrightarrow \rho\, Cis\,\theta + (\rho_{0}\,Cis\,\beta)\, j $. \end{center}
To illustrate this definition we write:
$a+bi=\rho Cis\theta $,$c+di=\rho _{0}Cis\beta $,$-c+di=-\rho _{0}Cis(-\beta )$,$a-bi=\rho Cis(-\theta )$ with $a+bi,c+di,-c+di,a-bi\in \mathbb{C}.$
Finding the moduli of the above complex numbers we obtain:
$\rho = | a + b i | = | a - b i | =\sqrt{a^{2}+b^{2}} , \rho_{0} = | c +di |
= | -c + di | =\sqrt{c^{2}+d^{2}}.$
\subsection{New Trigonometric form}
\begin{center} $\Upsilon :\mathbb{H}\longrightarrow \mathbb{J}$ ,$\mathbb{J}:$New trigonometric form.
$a + b i + c j + d k \longrightarrow \rho(Cos(\alpha)+Sin(\alpha)j)$.
$a + b i + c j + d k \longrightarrow \rho\, Cjs(\alpha) $. \end{center}
\begin{rem} The following is the short way to express the above. We know that the quaternions do not form a commutative field, so the order of the factors must be respected in all of these transformations. \begin{equation} \rho\, Cjs(\alpha )\equiv \rho (Cos(\alpha )+Sin(\alpha )\,j) \end{equation} The idea of the development is to shorten the trigonometric form. \end{rem}
From complex analysis we know that: \begin{equation*}
Ln(z)=Ln|z|+iArg(z) \end{equation*} \begin{equation*} Tan^{-1}(z)=\frac{i}{2}Ln(\frac{i+z}{i-z}) \end{equation*}
\begin{defn} Applying the Pythagorean Theorem in $\mathbb{R}^{4}$, i.e., working with the complex axes (Fig.~1), we define: \begin{equation} \rho =\sqrt{(a+bi)^{2}+(c+di)^{2}} \end{equation} \end{defn}
\begin{equation} a+bi=\rho Cos(\alpha );\quad c+di=\rho Sin(\alpha );\quad Tan(\alpha )=\frac{c+di}{a+bi} \end{equation} From (3.3) we have that: \begin{equation*} \alpha =Tan^{-1}(\frac{c+di}{a+bi})=Cos^{-1}(\frac{a+bi}{\rho })=Sin^{-1}( \frac{c+di}{\rho }) \end{equation*} Now, to obtain the true value of $\alpha$, any of the three inverse trigonometric functions $Tan^{-1},Cos^{-1},Sin^{-1}$ above can be evaluated; they must all be equal to $\alpha$. Now we wonder: why not consider the value of $\rho$ as the modulus of the quaternion? The answer is the following: the equality $\rho =|Q|$ does not always hold; it holds only when $b=d=0$, and the hypercomplex numbers were not defined under these conditions. The conclusion I reach is that if these equalities hold, then we can take the modulus of the quaternion as $\rho$. From the arctangent formula above we have that: \begin{equation*} Tan^{-1}(\frac{c+di}{a+bi})=\frac{i}{2}Ln(\frac{a+d+(b-c)i}{a-d+(b+c)i}) \end{equation*} i.e.: \begin{equation*} \alpha =\frac{i}{2}Ln(\frac{a+d+(b-c)i}{a-d+(b+c)i}) \end{equation*} As we can see, we now have $\rho$, given by (3.2), and $\alpha$, given by the formulas above. Then we can write a hypercomplex number as: \begin{equation*} a+bi+cj+dk=\rho Cjs(\alpha ) \end{equation*} where $\rho ,\alpha \in \mathbb{C}.$
\begin{center} \includegraphics[width=10cm,height=7cm]{img.bmp} \end{center}
In the above graph we can see that a quaternion can be represented in a system of two complex coordinate axes. It is impossible to represent the set $\mathbb{C}$ on a single coordinate axis; I only draw it this way to show where I obtain (3.2) and (3.3). At the same time, each complex axis I show has values in $\mathbb{R}^{2}$, i.e., we have the above graph in $\mathbb{R}^{4}.$
The remark I will make further on is just a question: when defining a field in trigonometric form, do we have to make reference to the underlying field?
As we can see, I have arrived at a new way of defining a quaternion. Now we have: \begin{equation} \rho Cjs(\alpha )\in \mathbb{H} \end{equation} with $\rho ,\alpha \in \mathbb{C}$; then we can write $\rho =\rho _{1}(Cos(\beta )+iSin(\beta ))$ with $\rho _{1}\in \mathbb{R},\alpha \in \mathbb{C}$. So \begin{equation*} a+bi+cj+dk=\rho Cjs(\alpha )=\rho _{1}(Cos(\beta )+iSin(\beta ))(Cos(\alpha )+Sin(\alpha )j) \end{equation*} From complex analysis we know that: \begin{equation*} Sin(\alpha )=Sin(x+iy)=Sin(x)Cosh(y)+iCos(x)Sinh(y) \end{equation*} \begin{equation*} Cos(\alpha )=Cos(x+iy)=Cos(x)Cosh(y)-iSin(x)Sinh(y) \end{equation*} Substituting
$a+bi+cj+dk=\rho _{1}[Cos(\beta )+iSin(\beta )][Cos(x)Cosh(y)-iSin(x)Sinh(y)+[Sin(x)Cosh(y)+iCos(x)Sinh(y)]j].$
yields (taking $\beta =0$ for simplicity): \begin{equation*} a=\rho _{1}Cos(x)Cosh(y) \end{equation*} \begin{equation*} b=-\rho _{1}Sin(x)Sinh(y) \end{equation*} \begin{equation*} c=\rho _{1}Sin(x)Cosh(y) \end{equation*} \begin{equation*} d=\rho _{1}Cos(x)Sinh(y) \end{equation*} William Rowan Hamilton, in ``On a New Species of Imaginary Quantities Connected with a Theory of Quaternions'', wrote: \begin{equation*} a=\rho Cos(\theta ) \end{equation*} \begin{equation*} b=\rho Sin(\theta )Cos(\vartheta ) \end{equation*} \begin{equation*} c=\rho Sin(\theta )Sin(\vartheta )Cos(\psi ) \end{equation*} \begin{equation*} d=\rho Sin(\theta )Sin(\vartheta )Sin(\psi ) \end{equation*}
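A quick numerical consistency check of these component formulas (taking $\beta =0$), using Python's standard \texttt{cmath} and \texttt{math} modules; the particular values of $\rho_1$, $x$, $y$ are hypothetical:
\begin{verbatim}
import cmath, math

rho1, x, y = 2.0, 0.7, 0.3
alpha = complex(x, y)
a = rho1 * math.cos(x) * math.cosh(y)
b = -rho1 * math.sin(x) * math.sinh(y)
c = rho1 * math.sin(x) * math.cosh(y)
d = rho1 * math.cos(x) * math.sinh(y)
# a + bi = rho1 * Cos(alpha) and c + di = rho1 * Sin(alpha)
assert abs(complex(a, b) - rho1 * cmath.cos(alpha)) < 1e-12
assert abs(complex(c, d) - rho1 * cmath.sin(alpha)) < 1e-12
\end{verbatim}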
\subsection{Trigonometric Matrix Form}
\begin{center} $\Psi: \mathbb{H} \longrightarrow \mathbb{M}$
$a + bi+ cj + dk \longrightarrow \left( \begin{array}{cc} \rho Cis\theta & \rho_{0}Cis\beta \\ -\rho_{0} Cis\beta & \rho Cis(-\theta) \end{array} \right) $ \end{center}
I develop this kind of definition mostly for practical work, such as multiplying or dividing, since these operations are easily performed in trigonometric form.
Let's see that:
\begin{center} $\left( \begin{array}{cc} \rho Cis\theta & \rho_{0} Cis\beta \\ -\rho_{0}Cis\beta & \rho Cis(-\theta) \end{array} \right)$ = $\left( \begin{array}{cc} \rho Cis\theta & 0 \\ 0 & \rho Cis(-\theta) \end{array} \right)$ $+$ $\left( \begin{array}{cc} 0 & \rho_{0} Cis\beta \\ -\rho_{0} Cis\beta & 0 \end{array} \right) $.
$\left( \begin{array}{cc} \rho Cis\theta & \rho_{0} Cis\beta \\ -\rho_{0}Cis\beta & \rho Cis(-\theta) \end{array} \right)$ = $\left( \begin{array}{cc} \rho Cos\theta & 0 \\ 0 & \rho Cos(-\theta) \end{array} \right) $ $+$ $\left( \begin{array}{cc} i\rho Sin\theta & 0 \\ 0 & i\rho Sin(-\theta) \end{array} \right) $ $+$ $\left( \begin{array}{cc} 0 & \rho_{0} Cos\beta \\ -\rho_{0} Cos\beta & 0 \end{array} \right) $ $+$ $\left( \begin{array}{cc} 0 & i\rho_{0}Sin\beta \\ -i\rho_{0}Sin\beta & 0 \end{array} \right) $. \end{center}
\subsection{Logarithmic Form}
Let us define the following map:
\begin{center} $\Gamma: \mathbb{H} \longrightarrow \mathbb{L}$
$a+bi+cj+dk \longrightarrow \ln ab^{i}c^{j}d^{k}$ \end{center}
This definition converts the components $a,b,c,d\in \mathbb{R}$ into natural logarithms: \begin{equation*} a=\ln a,\ b=\ln b,\ c=\ln c,\ d=\ln d \end{equation*} (the above is an abuse of notation). It is of course possible to regard each component of the quaternion through its real logarithm, since the components are real. This notation is only possible when the real components are positive.
\subsection{Exponential Form}
\begin{center} $\Delta: \mathbb{H} \longrightarrow \mathbb{E}$
$a+bi+cj+dk \longrightarrow \rho e^{i\theta}+\rho_{0}e^{i\beta}j.$ \end{center}
Let us define the exponential form. First, we know from complex analysis that $e^{z}=e^{x}(Cos\,y+iSin\,y)$, which means that \begin{equation} e^{yi}=Cos\,y+iSin\,y=Cis\,y. \end{equation}
Then: $Cis\theta =e^{i\theta},Cis\beta =e^{i\beta}.$
As we know, $a+bi+cj+dk$ can be written as $\rho \,Cis\,\theta +\rho _{0}\,Cis\,\beta \,j.$
$a+bi+cj+dk=\rho Cis\theta + \rho_{0} Cis\beta j$, by $(3.5),$ $ a+bi+cj+dk=\rho Cis\theta + \rho_{0} Cis\beta j=\rho e^{i\theta}+\rho_{0}e^{i\beta}j.$
Now let us calculate $(a+bi+cj+dk)^{n}.$
$(a+bi+cj+dk)^{n}=(\rho e^{i\theta }+\rho _{0}e^{i\beta }j)^{n}$; expanding by Newton's binomial theorem we have: $(\rho e^{i\theta}+\rho_{0}e^{i\beta}j)^{n}=$
$=(\rho e^{i\theta})^{n}+ C_{n}^{1}(\rho e^{i\theta})^{n-1}\rho_{0}e^{i\beta}j+C_{n}^{2}(\rho e^{i\theta})^{n-2}(\rho_{0}e^{i\beta}j)^{2}+...C_{n}^{n-1}\rho e^{i\theta}(\rho_{0}e^{i\beta}j)^{n-1}+(\rho_{0}e^{i\beta}j)^{n}. $ \begin{eqnarray} (\rho e^{i\theta}+\rho_{0}e^{i\beta}j)^{n}= \sum_{h=0}^{n}\left( \begin{array}{c} n \\ h \end{array} \right)(\rho e^{i\theta})^{n-h}(\rho_{0}e^{i\beta}j)^{h} \end{eqnarray}
\begin{eqnarray} (\rho e^{i\theta}+\rho_{0}e^{i\beta}j)^{n}=(\rho e^{i\theta})^{n}\sum_{h=0}^{n}\left( \begin{array}{c} n \\ h \end{array} \right)(\frac{\rho_{0}}{\rho}e^{i(\beta-\theta)}j)^{h} \end{eqnarray}
Now we must verify by induction whether $(3.7)$ holds.

It obviously holds for $n=1.$ Suppose it holds for $n=k$; we must then prove it for $n=k+1.$ Now, with the new definition in trigonometric form, we shall prove the following:
\begin{thm} We know that $a+bi+cj+dk=\rho \,Cjs(\alpha)$; then $(a+bi+cj+dk)^{n}=(\rho \,Cjs(\alpha))^{n}=\rho^{n}Cjs(n \alpha).$ \end{thm}
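Before proving the theorem, its identity can be probed numerically. The following is a minimal Python sketch (the helper names are ours; quaternion multiplication follows Hamilton's rules, and $Cjs$ is built from the complex-valued sine and cosine). The example uses real $\rho$ and $\alpha$, for which the identity reduces to the classical De Moivre formula; the same code lets the reader probe complex parameters.
\begin{verbatim}
import cmath

def qmul(p, q):
    # Hamilton product of quaternions given as 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def from_pair(z, w):
    # quaternion z + w j, with z and w complex (spanned by 1 and i)
    return (z.real, z.imag, w.real, w.imag)

def cjs(rho, alpha):
    # rho Cjs(alpha) = rho cos(alpha) + rho sin(alpha) j
    return from_pair(rho * cmath.cos(alpha), rho * cmath.sin(alpha))

def qpow(q, n):
    r = (1.0, 0.0, 0.0, 0.0)
    for _ in range(n):
        r = qmul(r, q)
    return r

rho, alpha, n = 1.3, 0.7, 3          # real parameters: classical De Moivre
print(qpow(cjs(rho, alpha), n))      # left-hand side
print(cjs(rho**n, n*alpha))          # right-hand side, for comparison
\end{verbatim}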
\section{Functions of a hypercomplex variable}
If a variable $w$ is related to $z$ in such a way that to each value of $z$ in $\mathbb{H}$ there corresponds a value, or a set of values, of $w$, then $w$ is a function of the hypercomplex variable $z$: $w=f(z)$. If $z=a+bi+cj+dk$ and $w=u+vi+sj+tk$ with $a,b,c,d,u,v,s,t\in \mathbb{R}$, then $u+vi+sj+tk=f(a+bi+cj+dk)$, and each of the real variables $u,v,s,t\in \mathbb{R}$ is determined by the real quadruple $(a,b,c,d)$. That is to say, $u=u(a,b,c,d)$, $v=v(a,b,c,d)$, $s=s(a,b,c,d)$, $t=t(a,b,c,d)$.
Example 1: $w=z^{2}+5.$
$u + v i + s j + t k = (a + b i + c j + d k)^{2} + 5,$
$u+vi+sj+tk=a^{2}-b^{2}-c^{2}-d^{2}+2abi+2acj+2adk+5.$ Then:
$u(a, b, c, d) = a^{2} - b^{2} - c^{2} - d^{2}+5,$
$v (a, b, c, d) = 2 ab$, $s (a, b, c, d) = 2 ac$, $t(a, b, c, d) =2ad.$
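As a quick check of these component functions, here is a small Python sketch (our own helper; quaternion multiplication by Hamilton's rules):
\begin{verbatim}
def qmul(p, q):
    # Hamilton product of quaternions given as 4-tuples (a, b, c, d)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

a, b, c, d = 1.0, 2.0, -0.5, 3.0
u, v, s, t = qmul((a, b, c, d), (a, b, c, d))
u += 5.0                                        # w = z^2 + 5
assert abs(u - (a*a - b*b - c*c - d*d + 5)) < 1e-12
assert abs(v - 2*a*b) < 1e-12
assert abs(s - 2*a*c) < 1e-12
assert abs(t - 2*a*d) < 1e-12
\end{verbatim}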
\subsection{Limit Definition}
Let $f(z)$ be a function defined at all points in some neighbourhood of $z_{0}$. We say that $w_{0}$ is the limit of $f(z)$ as $z$ tends to $z_{0}$, \begin{equation*} \lim_{z\rightarrow z_{0}}f(z)=w_{0}. \end{equation*} That is, for every positive $\epsilon$ there exists a positive number $\lambda$ such that: \begin{equation*}
|f(z)-w_{0}|<\epsilon \end{equation*} when: \begin{equation*}
|z-z_{0}|<\lambda (z\neq z_{0}). \end{equation*}
Suppose that \begin{equation*} \lim_{z\rightarrow z_{0}}f(z)=u_{0}+v_{0}i+s_{0}j+t_{0}k \end{equation*} where $f(z)=u+vi+sj+tk,$ $z=a+bi+cj+dk,$ $z_{0}=a_{0}+b_{0}i+c_{0}j+d_{0}k.$ Then the inequalities above become:
\begin{equation*}
|u+vi+sj+tk-(u_{0}+v_{0}i+s_{0}j+t_{0}k)|<\epsilon \end{equation*} when: \begin{equation*}
|a+bi+cj+dk-(a_{0}+b_{0}i+c_{0}j+d_{0}k)|<\lambda . \end{equation*} Making algebraic transformations, $(4.5),(4.6)$ yield: \begin{equation*}
|(u-u_{0})+(v-v_{0})i+(s-s_{0})j+(t-t_{0})k|<\epsilon \end{equation*} when: \begin{equation*}
|(a-a_{0})+(b-b_{0})i+(c-c_{0})j+(d-d_{0})k|<\lambda . \end{equation*} But from $(4.7)$ and $(4.8)$ we have, applying $(2.1)$: \begin{equation*} \sqrt{(u-u_{0})^{2}+(v-v_{0})^{2}+(s-s_{0})^{2}+(t-t_{0})^{2}}<\epsilon \end{equation*} when: \begin{equation*} \sqrt{(a-a_{0})^{2}+(b-b_{0})^{2}+(c-c_{0})^{2}+(d-d_{0})^{2}}<\lambda . \end{equation*}
Uniqueness of the limit of a hypercomplex function: suppose there exist two limits $w_{0},w_{1}$ $(w_{0}\neq w_{1}).$ By the definition of the limit: \begin{equation*}
|f(z)-w_{0}|<\frac{\epsilon}{2} \quad \text{when } |z-z_{0}|<\lambda \end{equation*} \begin{equation*}
|f(z)-w_{1}|<\frac{\epsilon}{2} \quad \text{when } |z-z_{0}|<\lambda \end{equation*}
Let us work with the real number $|w_{1}-w_{0}|.$ The following chain of steps is straightforward:
\begin{equation*}
|w_{1}-w_{0}|=|w_{1}-w_{0}+f(z)-f(z)|=|-f(z)+w_{1}+f(z)-w_{0}|=|-(f(z)-w_{1})+f(z)-w_{0}|\leq \end{equation*} \begin{equation*}
\leq|-(f(z)-w_{1})|+|f(z)-w_{0}|=|f(z)-w_{1}|+|f(z)-w_{0}|<\frac{\epsilon }{2}+\frac{\epsilon}{2}=\epsilon \end{equation*} \begin{equation*}
|w_{1}-w_{0}|<\epsilon , \end{equation*}
but since $w_{0}$ and $w_{1}$ are constants, $|w_{1}-w_{0}|$ is a fixed number and cannot be made arbitrarily small unless it is zero. Hence $w_{0}=w_{1}.$
Define a function $f(z)=w=u+vi+sj+tk$ such that:
\begin{equation*} f^{\prime}( z ) = \lim_{\Delta a \rightarrow 0}\frac{u(a+\Delta a)-u(a)}{ \Delta a}+\lim_{\Delta a \rightarrow 0}\frac{v(a+\Delta a)-v(a)}{\Delta a} i + \lim_{\Delta a \rightarrow 0}\frac{s(a+\Delta a)-s(a)}{\Delta a}j + \lim_{\Delta a \rightarrow 0}\frac{t(a+\Delta a)-t(a)}{\Delta a}k. \end{equation*}

\begin{equation*} f^{\prime}( z ) = \lim_{\Delta b \rightarrow 0}\frac{u(b+\Delta b)-u(b)}{ \Delta b\,i}+ \lim_{\Delta b \rightarrow 0}\frac{v(b+\Delta b)-v(b)}{\Delta b\,i} i + \lim_{\Delta b \rightarrow 0}\frac{s(b+\Delta b)-s(b)}{\Delta b\,i}j + \lim_{\Delta b \rightarrow 0}\frac{t(b+\Delta b)-t(b)}{\Delta b\,i} k. \end{equation*}

\begin{equation*} f^{\prime}( z ) =\lim_{\Delta c \rightarrow 0}\frac{u(c+\Delta c)-u(c)}{ \Delta c\,j} + \lim_{\Delta c \rightarrow 0}\frac{v(c+\Delta c)-v(c)}{\Delta c\,j } i+ \lim_{\Delta c \rightarrow 0}\frac{s(c+\Delta c)-s(c)}{\Delta c\,j}j + \lim_{\Delta c \rightarrow 0}\frac{t(c+\Delta c)-t(c)}{\Delta c\,j}k. \end{equation*}

\begin{equation*} f^{\prime}( z ) =\lim_{\Delta d \rightarrow 0}\frac{u(d+\Delta d)-u(d)}{ \Delta d\,k} + \lim_{\Delta d \rightarrow 0}\frac{v(d+\Delta d)-v(d)}{\Delta d\,k }i + \lim_{\Delta d \rightarrow 0}\frac{s(d+\Delta d)-s(d)}{\Delta d\,k}j + \lim_{\Delta d \rightarrow 0}\frac{t(d+\Delta d)-t(d)}{\Delta d\,k} k. \end{equation*}
Then:
\begin{equation} f^{\prime}( z ) = \frac{\partial u}{\partial a}+ \frac{\partial v}{\partial a }i + \frac{\partial s}{\partial a}j + \frac{\partial t}{\partial a}k. \end{equation}
\begin{equation} f^{\prime}( z ) = - \frac{\partial u}{\partial b}i+ \frac{\partial v}{ \partial b}+ \frac{\partial s}{\partial b}k -\frac{\partial t}{\partial b} j. \end{equation}
\begin{equation} f^{\prime}( z ) = - \frac{\partial u}{\partial c} j -\frac{\partial v}{ \partial c} k +\frac{\partial s}{\partial c} + \frac{\partial t}{\partial c} i. \end{equation}
\begin{equation} f^{\prime}( z ) = - \frac{\partial u}{\partial d} k +\frac{\partial v}{ \partial d} j - \frac{\partial s}{\partial d}i + \frac{\partial t}{\partial d }. \end{equation}
From the above expressions we conclude that a hypercomplex function is analytic (entire) if:
\begin{eqnarray} \frac{\partial u}{\partial a}=\frac{\partial v}{\partial b}= \frac{\partial s }{\partial c}=\frac{\partial t}{\partial d}. \end{eqnarray}
\begin{eqnarray} \frac{\partial v}{\partial a} = -\frac{\partial u}{\partial b}=\frac{ \partial t}{\partial c}=-\frac{\partial s}{\partial d}. \end{eqnarray}
\begin{eqnarray} \frac{\partial s}{\partial a}=-\frac{\partial t}{\partial b} = - \frac{ \partial u}{\partial c} =\frac{\partial v}{\partial d}. \end{eqnarray}
\begin{eqnarray} \frac{\partial t}{\partial a}=\frac{\partial s}{\partial b} =-\frac{\partial v}{\partial c} = - \frac{\partial u}{\partial d}. \end{eqnarray}
We call the latter equations the generalized Cauchy-Riemann conditions.
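The conditions can be checked symbolically for a candidate function; the following Python/SymPy sketch (our own code) verifies them for the identity map $f(z)=z$, and any other candidate quadruple $u,v,s,t$ may be substituted:
\begin{verbatim}
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
u, v, s, t = a, b, c, d      # component functions of f(z) = z

D = sp.diff
pairs = [
    (D(u, a), D(v, b)), (D(v, b), D(s, c)), (D(s, c), D(t, d)),
    (D(v, a), -D(u, b)), (-D(u, b), D(t, c)), (D(t, c), -D(s, d)),
    (D(s, a), -D(t, b)), (-D(t, b), -D(u, c)), (-D(u, c), D(v, d)),
    (D(t, a), D(s, b)), (D(s, b), -D(v, c)), (-D(v, c), -D(u, d)),
]
print(all(sp.simplify(x - y) == 0 for x, y in pairs))   # True
\end{verbatim}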
\end{document}
Relativity and Equivalence in Hilbert Space: A Principle-Theory Approach to the Aharonov–Bohm Effect
Guy Hetzroni
Foundations of Physics 50 (2):120-135 (2020)
The Open University of Israel
This paper formulates generalized versions of the general principle of relativity and of the principle of equivalence that can be applied to general abstract spaces. It is shown that when the principles are applied to the Hilbert space of a quantum particle, its law of coupling to electromagnetic fields is obtained. It is suggested to understand the Aharonov-Bohm effect in light of these principles, and the implications for some related foundational controversies are discussed.
Keywords: Principle of Equivalence; Gauge Symmetries; Aharonov-Bohm Effect; Quantum Theory
Categories: Gauge Theories in Philosophy of Physical Science; Symmetry in Physics in Philosophy of Physical Science
DOI: 10.1007/s10701-020-00322-y
SciPost Physics Proceedings, Issue 4
4th International Conference on Holography, String Theory and Discrete Approach in Hanoi
Event dates: from 2020-08-03 to 2020-08-08.
Due to the coronavirus outbreak, this conference will be held entirely online. Since participants do not need to travel to Vietnam, it has been possible to invite many speakers to give talks via Zoom. As a result, a large number of presentations are scheduled, on many topics including asymptotic symmetries, the double copy and soft theorems. For this reason, the organizing committee has arranged these conference proceedings, which will hopefully be of help to everyone who wishes to contribute. The conference is supported by Beihang University.
Conference website: https://pias.edu.vn/international-conference-on-holography-string-theory-and-discrete-approach-in-hanoi-vietnam/
(Guest) Fellows responsible for this Issue
Prof. Dimitrios Giataganas
Dr Shingo Takeuchi
Properties of non-relativistic string theory
E.A. Bergshoeff, J. Lahnsteiner, L. Romano, C. Simsek
SciPost Phys. Proc. 4, 001 (2021) · published 13 August 2021
We show how Newton-Cartan geometry can be generalized to String Newton-Cartan geometry which is the geometry underlying non-relativistic string theory. Several salient properties of non-relativistic string theory in this geometric background are presented and a discussion of possible research for the future is outlined.
De Sitter entropy as holographic entanglement entropy
Nikolaos Tetradis
We review the results of refs. [1, 2], in which the entanglement entropy in spaces with horizons, such as Rindler or de Sitter space, is computed using holography. This is achieved through an appropriate slicing of anti-de Sitter space and the implementation of a UV cutoff. When the entangling surface coincides with the horizon of the boundary metric, the entanglement entropy can be identified with the standard gravitational entropy of the space. For this to hold, the effective Newton's constant must be defined appropriately by absorbing the UV cutoff. Conversely, the UV cutoff can be expressed in terms of the effective Planck mass and the number of degrees of freedom of the dual theory. For de Sitter space, the entropy is equal to the Wald entropy for an effective action that includes the higher-curvature terms associated with the conformal anomaly. The entanglement entropy takes the expected form of the de Sitter entropy, including logarithmic corrections.
From Rindler fluid to dark fluid on the holographic cutoff surface
Rong-Gen Cai, Gansukh Tumurtushaa, Yun-Long Zhang
As an approximation to the near horizon regime of black holes, the Rindler fluid was proposed on an accelerating cutoff surface in the flat spacetime. The concept of the Rindler fluid was then generalized into a flat bulk with the cutoff surface of the induced de Sitter and FRW universe, such that an effective description of dark fluid in the accelerating universe can be investigated.
Two faces of Hawking radiation and thin-shell emission: pair-creation vs. tunneling
Dong-han Yeom
We first revisit Hartle and Hawking's path integral derivation of Hawking radiation. In the first point of view, we interpret that a particle-antiparticle pair is created and the negative energy antiparticle falls into the black hole. On the other point of view, a particle inside the horizon, or beyond the Einstein-Rosen bridge, tunnels to outside the horizon, where this computation requires the analytic continuation of the time. These two faces of the Hawking radiation process can be extended to not only particles but also fields. As a concrete example, we study the thin-shell tunneling process; by introducing the antishell as a negative tension shell, we can give the consistent interpretation for two pictures, where one is a tunneling from inside to outside the horizon using instantons, while the other is a shell-antishell pair-creation. This shows that the Euclidean path integral indeed carries vast physical implications not only for perturbative, but also for non-perturbative processes.
Charges in the extended BMS algebra: Definitions and applications
Massimo Porrati
This is a review of selected topics from recent work on symmetry charges in asymptotically flat spacetime done by the author in collaboration with U. Kol and R. Javadinezhad. First we reinterpret the reality constraint on the boundary graviton as the gauge fixing of a new local symmetry, called dual supertranslations. This symmetry extends the BMS group and bears many similarities to the dual (magnetic) gauge symmetry of electrodynamics. We use this new gauge symmetry to propose a new description of the TAUB-NUT space that does not contain closed time-like curves. Next we summarize progress towards the definition of Lorentz and super-Lorentz charges that commute with supertranslations and with the soft graviton mode.
A sharp transition in quantum chaos and thermodynamics of mass deformed SYK model
Tomoki Nosaka
We review our recent work [arXiv:2009.10759] where we studied the chaotic property of the two coupled Sachdev-Ye-Kitaev systems exhibiting a Hawking-Page like phase transition. By computing the out-of-time-ordered correlator in the large N limit by using the bilocal field formalism, we found that the chaos exponent of this model shows a discontinuous fall-off at the phase transition temperature. Hence in this model the Hawking-Page like transition is correlated with a transition in chaoticity, as expected from the relation between a black hole geometry and the chaotic behavior in the dual field theory.
Analogous Hawking radiation in butterfly effect
Takeshi Morita
We propose that Hawking radiation-like phenomena may be observed in systems that show butterfly effects. Suppose that a classical dynamical system has a Lyapunov exponent $\lambda_L$, and is deterministic and non-thermal ($T=0$). We argue that, if we quantize this system, the quantum fluctuations may imitate thermal fluctuations with temperature $T \sim \hbar \lambda_L/2 \pi $ in a semi-classical regime, and it may cause analogous Hawking radiation. We also discuss that our proposal may provide an intuitive explanation of the existence of the bound of chaos proposed by Maldacena, Shenker and Stanford.
Superradiant instability and black resonators in AdS
Takaaki Ishii
Rapidly rotating Myers-Perry-AdS black holes are unstable against rotational superradiance. From the onset of the instability, cohomogeneity-1 black resonators are constructed in five-dimensional asymptotically AdS space. By using the cohomogeneity-1 metric, perturbations of the cohomogeneity-1 black resonators are also studied.
Position-dependent mass quantum systems and ADM formalism
Davood Momeni
The classical Einstein-Hilbert (EH) action for general relativity (GR) is shown to be formally analogous to classical position-dependent mass (PDM) models. The analogy is developed and used to build the covariant classical Hamiltonian as well as to define an alternative phase portrait for GR. The set of associated Hamilton's equations in the phase space is presented as a first-order system dual to the Einstein field equations. Following the principles of quantum mechanics, I build a canonical theory for classical general relativity. A fully consistent quantum Hamiltonian for GR is constructed based on adopting a high-dimensional phase space. It is observed that the functional wave equation is timeless. As a direct application, I present an alternative wave equation for quantum cosmology. In comparison to the standard Arnowitt-Deser-Misner (ADM) decomposition and quantum gravity proposals, I extend my analysis beyond the covariant regime when the metric is decomposed into the 3+1-dimensional ADM form. I show that an equal-dimensional phase space can be obtained if one applies the ADM decomposed metric.
Hawking flux of 4D Schwarzschild blackhole with supertransition correction to second-order
Shingo Takeuchi
The first part of this article is the proceedings contribution for my talk on arXiv:2004.07474, which reports on the issue in the title of this article. The second part is a detailed description of arXiv:2004.07474.
Classical boundary field theory of Jacobi sigma models by Poissonization
Ion V. Vancea
In this paper, we are going to construct the classical field theory on the boundary of the embedding of $\mathbb{R} \times S^{1}$ into the manifold $M$ by the Jacobi sigma model. By applying the poissonization procedure and by generalizing the known method for Poisson sigma models, we express the fields of the model as perturbative expansions in terms of the reduced phase space of the boundary. We calculate these fields up to the second order and illustrate the procedure for contact manifolds.
Linear stability of Einstein and de Sitter universes in the quadratic theory of modified gravity
Mudhahir Al-Ajmi
We consider the Einstein static and the de Sitter universe solutions and examine their instabilities in a subclass of quadratic modified theories of gravity. This modification, proposed by Nash, is an attempt to generalize general relativity. Interestingly, we discover that the Einstein static universe is unstable in the context of the modified gravity. In contrast to the Einstein static universe, the de Sitter universe remains stable under metric perturbations up to second order.
Jiang Long
This is an introduction to the relationship between area law and OPE blocks in conformal field theory.
Matthew J. Lake
The scale of quantum mechanical effects in matter is set by Planck's constant, $\hbar$. This represents the quantisation scale for material objects. In this article, we present a simple argument why the quantisation scale for space, and hence for gravity, may not be equal to $\hbar$. Indeed, assuming a single quantisation scale for both matter and geometry leads to the `worst prediction in physics', namely, the huge difference between the observed and predicted vacuum energies. Conversely, assuming a different quantum of action for geometry, $\beta \ll \hbar$, allows us to recover the observed density of the Universe. Thus, by measuring its present-day expansion, we may in principle determine, empirically, the scale at which the geometric degrees of freedom should be quantised.
Fine-grained recognition of plants from images
Milan Šulc and Jiří Matas
Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild".
We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide an insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild".
The results suggest that recognition of segmented leaves is practically a solved problem, when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs makes them suitable for plant recognition "in the wild" where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
Recognition of natural objects in the surrounding environment has been of great importance for the humankind since time immemorial. The desire to understand and describe the living nature lead scientists to create systems of biological classification, counting an enormous number of categories and species. For illustration: while the 10th edition of Linnaeus's Systema Naturae [1] describes about 6000 plant species [2], currently the number of published and accepted plant species in the world is over 310,000 [3].
We study and develop computer vision algorithms to assist or fully automate the plant identification process. From the machine learning point of view, plant recognition is a fine-grained classification task with high intra-class variability and often small inter-class differences, which are often related to the taxonomic hierarchical classification.
Computer vision methods for plant recognition have a number of applications, including mobile field guides using computer vision to automate or speed up the identification process, image data processing for biological databases, automatic detection, registration and mapping of plants from publicly available data, automation in agriculture, etc.
The rest of this section contains a review of the state-of-the-art in plant recognition and in the related computer vision areas—texture recognition and deep learning. Our previously published methods and experiments [4,5,6,7,8], on which this article is based, are not mentioned in this section but rather described in more detail, extended and discussed in the rest of the article.
Plant recognition
Interest in methods for visual classification of plants has grown recently [9,10,11,12] as devices equipped with cameras became ubiquitous, making intelligent field guides, education tools and automation in forestry and agriculture practical. Belhumeur et al. [9] discuss the use of such a system in the field allowing a botanist to quickly search entire collections of plant species—a process that previously took hours can now be done in seconds. Plant recognition has been posed, almost without exceptions [13, 14], as recognition of photos depicting solely a specific plant organ such as flower, bark, fruit, leaf or their combination [9,10,11,12, 15,16,17,18,19,20,21,22,23,24,25,26,27].
Leaf recognition
Leaf recognition has been by far the most popular approach to plant recognition and a wide range of methods has been reported in the literature [9, 11, 12, 15,16,17,18,19,20,21,22,23,24,25,26,27]. Recognition of leaves usually refers only to recognition of broad leaves; needles are treated separately. Several techniques have been proposed for leaf description, often based on combining features of different character (shape features, colour features, etc.).
A bag of words model with Scale Invariant Feature Transform (SIFT [28]) descriptors was applied to leaf recognition by Fiel and Sablatnig [11]. Several shape methods have been compared on leaf recognition by Kadir et al. [15]. Of the compared methods—geometric features, moment invariants, Zernike moments and polar Fourier Transform—the last performed best on an unpublished dataset.
Kumar et al. [12] describe Leafsnap, a computer vision system for automatic plant species identification, which has been developed from the earlier plant identification system by Agarwal et al. [16] and Belhumeur et al. [9]. Kumar et al. [12] introduced a pre-filter on input images, numerous speed-ups and additional post-processing within the segmentation algorithm, and the use of a simpler and more efficient curvature-based recognition algorithm. On the introduced Leafsnap database of 184 tree species, their recognition system finds correct matches among the top 5 results for 96.8% of queries from the dataset. The resulting electronic Leafsnap field guide is available as a mobile app for iOS devices. The leaf images are processed on a server; an internet connection is thus required for recognition, which may cause problems in natural areas with slow or no data connection. Another limit is the need to take the photos of the leaves on a white background.
Wu et al. [17] proposed a probabilistic neural network for leaf recognition using 12 digital morphological features, derived from 5 basic features (diameter, physiological length, physiological width, leaf area, leaf perimeter). The authors collected a publicly available plant leaf database named Flavia.
Kadir et al. [24] prepared the Foliage dataset, consisting of 60 classes of leaves, each containing 120 images. The best reported result on this dataset reported by Kadir et al. [18] was achieved by a combination of shape, vein, texture and colour features processed by principal component analysis before classification by a probabilistic neural network.
Söderkvist [25] proposed a visual classification system of leaves and collected the so called Swedish dataset containing scanned images of 15 classes of Swedish trees. Qi et al. [29] achieve 99.38% accuracy on the Swedish dataset using a texture descriptor called Pairwise Rotation Invariant Co-occurrence Local Binary Patterns [27] with Support Vector Machine (SVM) classification.
Novotný and Suk [22] proposed a leaf recognition system, using Fourier descriptors of the leaf contour normalised to translation, rotation, scaling and starting point of the boundary. The authors also collected a large leaf dataset called Middle European Woods (MEW) containing 153 classes of native or frequently cultivated trees and shrubs in Central Europe. Their method achieves 84.92% accuracy when the dataset is split into equally sized training and test set. MEW and Leafsnap are the most challenging leaf recognition datasets.
One possible application of leaf description is the identification of a disease. Pydipati et al. [30] proposed a system for citrus disease identification using color co-occurrence method (CCM), achieving accuracies of over 95% for 4 classes (normal leaf samples and samples with a greasy spot, melanose, and scab).
Tree bark recognition
The problem of automatic tree identification from photos of bark can be naturally formulated as texture recognition.
Several methods have been proposed and evaluated on datasets which are not publicly available. Chi et al. [31] proposed a method using Gabor filter banks. Wan et al. [32] performed a comparative study of bark texture features: the grey level run-length method, co-occurrence matrices method, histogram method and auto-correlation method. The authors also show that the performance of all classifiers improved significantly when color information was added. Song et al. [33] presented a feature-based method for bark recognition using a combination of Grey-Level Co-occurrence Matrix (GLCM) and a binary texture feature called long connection length emphasis. Huang et al. [34] used GLCM together with fractal dimension features for bark description. The classification was performed by artificial neural networks.
Since the image data used in the experiments discussed above is not available, it is difficult to assess the quality of the results and to perform comparative evaluation.
Fiel and Sablatnig [11] proposed methods for automated identification of tree species from images of the bark, leaves and needles. For bark description they created a Bag of Words with SIFT descriptors in combination with GLCM and wavelet features. SVM with radial basis function kernel was used for classification. They introduced the Österreichische Bundesforste AG (Austrian Federal Forests) bark dataset consisting of 1182 photos from 11 classes. We refer to this dataset as the AFF bark dataset. A recognition accuracy of 64.2 and 69.7% was achieved on this dataset for training sets with 15 and 30 images per class.
Fiel and Sablatnig also describe an experiment with two human experts, a biologist and a forest ranger, both employees of Österreichische Bundesforste AG. Their classification rate on a subset of the dataset with 9 images per class, 99 images in total, was 56.6% (biologist) and 77.8% (forest ranger).
Boudra et al. [35] review and compare different variants of multi-scale Local Binary Patterns based texture descriptors and evaluate their performance in tree bark image retrieval.
Plant identification from diverse images
Recognition of plants given several images of different content-types, such as different plant organs or the entire plant, should be in principle more reliable than recognition only given a one image of one specific plant organ such as leaf or bark. On the other hand, the task is more challenging if an image of an unspecified organ is given. Such problems are posed by the Plant Identification task of the LifeCLEF workshop [14, 36, 37], known as the PlantCLEF challenge, since 2014. The challenge tasks have slightly changed every year. Our contributions to the 2016 and 2017 challenges will be described later in this article.
The 2016 [38] edition of PlantCLEF was evaluated as an open-set recognition problem, i.e. "a problem in which the recognition system has to be robust to unknown and never seen categories". Each image in the task belongs to one of the 7 content-types: leaf, leaf scan, flower, fruit, stem, branch, or entire plant. Albeit the content-type is available in the meta-data, similarly to last years, the best scoring results use the same deep networks for all types of content [39,40,41]. Ge et al. [42] showed that in this task generic Convolutional Neural Network (CNN) features perform better than content-specific CNN features, and that their combination improves the accuracy. Choi et al. [41] showed that bagging of several generic CNNs improves the accuracy as well, winning the PlantCLEF 2015 challenge.
PlantCLEF 2017 [43] addressed a practical problem of training a very fine grained classifier (10,000 species) from data with noisy labels: Besides 256 thousand labelled images in the "trusted" training set, the organizers also provided URLs to more than 1.4 million weakly-labelled web images in the "noisy" training set, obtained by Google and Bing image search. The evaluation of the task is performed on a test set containing 25,170 images of 13,471 observations (specimen).
Pl@ntNet [13] is another content-type based plant recognition system. It is also a collaborative information system providing an image sharing and retrieval application for plant identification. It has been developed by scientists from four French research organizations (Cirad, INRA, INRIA and IRD) and the Tela Botanica network. The Pl@ntNet-identify Tree Database provides identification by combining information from images of the habitat, flower, fruit, leaf and bark. The exact algorithms used in the Pl@ntNet-identify web service [44] and their accuracies are not publicly documented. There is also a Pl@ntNet mobile app [45], an image sharing and retrieval application for the identification of plants.
Texture recognition
Texture information is an essential feature for recognition of many plant organs. Texture analysis is a well-established problem with a large number of existing methods, many of them being described in surveys [46,47,48,49]. Texture itself is hard to define. There are various definitions of visual texture, but they often lack formality and completeness. For illustration, let us quote an informal definition by Hawkins [50]:
The notion of texture appears to depend upon three ingredients: (1) some local "order" is repeated over a region which is large in comparison to the order's size, (2) the order consists in the non-random arrangement of elementary parts, and (3) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region.
Here we only review the recent development and the state-of-the-art.
Several recent approaches to texture recognition report excellent results on standard datasets, many of them working only with image intensity and ignoring the available color information. A number of approaches is based on the popular local binary patterns (LBP) [51, 52], such as the recent Pairwise Rotation Invariant Co-occurrence Local Binary Patterns of Qi et al. [27] or the Histogram Fourier Features of Ahonen et al. [53, 54]. A cascade of invariants computed by scattering transforms was proposed by Sifre and Mallat [55] in order to construct an affine invariant texture representation. Mao et al. [56] use a bag-of-words model with a dictionary of so called active patches: raw intensity patches that undergo further spatial transformations and adjust themselves to best match the image regions. While the Active Patch Model doesn't use color information, the authors claim that adding color will further improve the results. The method of Cimpoi et al. [57] using Improved Fisher Vectors (IFV) for texture description shows further improvement when combined with describable texture attributes learned on the Describable Textures Dataset (DTD) and with color attributes.
Recently, Cimpoi et al. [58, 59] pushed the state-of-the-art in texture recognition using a new encoder denoted as FV-CNN-VD, obtained by Fisher Vector pooling of a very deep convolutional neural network (CNN) filter bank pre-trained on ImageNet by Simonyan and Zisserman [60]. The CNN filter bank operates conventionally on preprocessed RGB images. This approach achieves state-of-the-art accuracy, yet due to the size of the very deep VGG networks it may not be suitable for real-time applications when evaluated without a high-performance graphics processing unit (GPU) for massive parallelization.
Deep convolutional neural networks
Deep convolutional neural networks (CNNs) succeeded in a number of computer vision tasks, especially those related to complex recognition and detection of objects with large databases of training images, such as the computer vision challenges ImageNet [61], Pascal VOC [62] and Common Objects in Context (COCO) [63]. Since the success of Krizhevsky's network [64] in the ImageNet 2012 Image Classification challenge, deep learning research leads to state-of-the-art results in such tasks. This was also the case of the PlantCLEF challenges [37, 38, 43], where the deep learning submissions [41, 42, 65, 66] outperformed combinations of hand-crafted methods significantly.
Recently, the very deep residual networks of He et al. [67] gained a lot of attention after achieving the best results in both the ILSVRC (ImageNet Large Scale Visual Recognition Challenge) 2015 and the COCO 2015 Detection Challenge. The residual learning framework allows to efficiently train networks that are substantially deeper than the previously used CNN architectures.
Szegedy et al. [68] study the ways to scale up networks efficiently by factorized convolutions and aggressive regularization. Their study is performed on Inception-style networks (i.e. networks with architectures similar to GoogleNet [69]), and propose the so called Inception v3 architecture. Furthermore, Szegedy et al. [70] show that training with residual connections accelerates the training of Inception networks significantly and that a residual Inception networks may outperform a similarly expensive Inception networks without residual connections by a thin margin.
Texture recognition approach to plant identification
Inspired by the textural nature of bark and leaf surfaces, we approach plant recognition as texture classification. In order to describe texture independently of the pattern size and orientation in the image, a description invariant to rotation and scale is needed. For practical applications we also demand computational efficiency.
We introduce novel texture description called Fast Features Invariant to Rotation and Scale of Texture (Ffirst), which combines several design choices to satisfy the given requirements. This method builds on and improves our texture descriptor for bark recognition [4].
Completed local binary pattern and histogram Fourier features
The Ffirst description is based on the Local Binary Patterns [51, 52, 71]. The common LBP operator (later denoted as sign-LBP) locally computes the signs of differences between the center pixel and its P neighbours on a circle of radius R. With an image function f(x, y) and neighbourhood point coordinates \((x_p,y_p)\):
$$\begin{aligned} \begin{aligned} \text {LBP}_{P,R} (x,y)&= \sum \limits _{p=0}^{P-1} s( f(x,y) - f(x_p,y_p) ) 2^p , \; s(z)&=\left\{ \begin{array}{ll} 1 : &{} \text {if } z \le 0,\\ 0 : &{} \text {otherwise.} \end{array} \right. \end{aligned} \end{aligned}$$
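For illustration, the sign-LBP codes and their histogram can be computed with scikit-image as in the following sketch (the file name is a stand-in; skimage's sampling and interpolation conventions may differ in minor details from the equation above):

```python
import numpy as np
from skimage import color, io
from skimage.feature import local_binary_pattern

P, R = 8, 1.0
img = color.rgb2gray(io.imread('bark.jpg'))  # stand-in input image
codes = local_binary_pattern(img, P, R, method='default')  # sign-LBP codes
hist, _ = np.histogram(codes, bins=2**P, range=(0, 2**P), density=True)
```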
To achieve rotation invariance, we adopt the so called LBP histogram Fourier features (LBP-HF) introduced by Ahonen et al. [53]. LBP-HF describe the histogram of uniform patterns using coefficients of the discrete Fourier transform (DFT). Uniform LBP are patterns with at most 2 spatial transitions (bitwise 0-1 changes). Unlike the simple rotation invariants using \(\hbox {LBP}^\text {ri}\) [71, 72], which joins all uniform patterns with the same number of 1s into one bin, the LBP-HF features preserve the information about relative rotation of the patterns.
Denoting a uniform pattern \(U_p ^{n,r}\), where n is the "orbit" number corresponding to the number of "1" bits and r denotes the rotation of the pattern, the DFT for given n is expressed as:
$$\begin{aligned} H(n,u) = \sum \limits _{r=0}^{P-1} h_I\left( U_p^{n,r}\right) e^{-i2\pi u r /P} \,, \end{aligned}$$
where the histogram value \(h_I (U_p^{n,r})\) denotes the number of occurrences of a given uniform pattern in the image.
The LBP-HF features are equal to the absolute values of the DFT coefficients, and thus are not influenced by the phase shift caused by rotation:

$$\begin{aligned} \text{LBP-HF}(n,u) = \vert H(n,u) \vert = \sqrt{ H(n,u)\, \overline{H(n,u)}} . \end{aligned}$$
Since \(h_I\) are real, \(H(n,u) = H(n,P-u)\) for \(u = (1,\ldots ,P-1)\), and therefore only \(\left\lfloor {\frac{P}{2}}\right\rfloor +1\) of the DFT magnitudes are used for each set of uniform patterns with n "1" bits for \(0<n<P\). Three other bins are added to the resulting representation, namely two for the "1-uniform" patterns (with all bins of the same value) and one for all non-uniform patterns.
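A from-scratch sketch of the LBP-HF computation for \(P=8\) is given below (our own code). It uses the fact that the uniform patterns with \(n\) ones, \(0<n<P\), are exactly the \(P\) rotations of a block of \(n\) consecutive ones:

```python
import numpy as np

P = 8

def rot(code, r):
    """Circular bit-rotation of a P-bit LBP code by r positions."""
    return ((code << r) | (code >> (P - r))) & ((1 << P) - 1)

def lbp_hf(hist):
    """LBP-HF features from a 2^P-bin histogram of (sign- or magnitude-) LBP codes."""
    feats = []
    for n in range(1, P):                 # orbits of uniform patterns, 0 < n < P
        base = (1 << n) - 1               # n consecutive '1' bits
        h = np.array([hist[rot(base, r)] for r in range(P)], dtype=float)
        feats.extend(np.abs(np.fft.fft(h))[:P // 2 + 1])  # rotation-invariant moduli
    uniform = {rot((1 << n) - 1, r) for n in range(1, P) for r in range(P)}
    feats += [hist[0], hist[2**P - 1]]    # the two 1-uniform patterns
    feats.append(sum(hist[c] for c in range(2**P)
                     if c not in uniform and c not in (0, 2**P - 1)))  # non-uniform bin
    return np.asarray(feats)
```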
The LBP histogram Fourier features can be generalized to any set of uniform patterns. In Ffirst, the LBP-HF-S-M description [54] is used, where the histogram Fourier features of both sign- and magnitude-LBP are calculated to build the descriptor. The magnitude-LBP [73] checks if the magnitude of the difference of the neighbouring pixel \((x_p,y_p)\) against the central pixel (x, y) exceeds a threshold \(t_p\):
$$\begin{aligned} \text {LBP-M}_{P,R} (x,y) = \sum _{p=0}^{P-1} s( \vert f(x,y) - f(x_p,y_p) \vert - t_p) 2^p . \end{aligned}$$
We adopted the common practice of choosing the threshold value (for neighbours at p-th bit) as the mean value of all m absolute differences in the whole image:
$$\begin{aligned} t_p = \sum \limits _{i=1}^m \dfrac{ \vert f(x_i,y_i) - f(x_{ip},y_{ip}) \vert }{m}. \end{aligned}$$
The LBP-HF-S-M histogram is created by concatenating histograms of LBP-HF-S and LBP-HF-M (computed from uniform sign-LBP and magnitude-LBP).
Multi-scale description and scale invariance
A scale space is built by computing LBP-HF-S-M from circular neighbourhoods with exponentially growing radius R. Gaussian filtering is used to overcome noise.
Unlike the MS-LBP approach of Mäenpää and Pietikäinen [74], where the radii of the LBP operators are chosen so that the effective areas of different scales touch each other, Ffirst uses a finer scaling with a step of \(\sqrt{2}\) between scales radii \(R_i\), i.e. \(R_i = R_{i-1} \sqrt{2}\). This radius change is equivalent to decreasing the image area to one half. The first LBP radius used is \(R_1=1\), as the LBP with low radii capture important high frequency texture characteristics.
Similarly to [74], the filters are designed so that most of their mass lies within an effective area of radius \(r_i\). We select the effective area radius such that the effective areas at the same scale touch each other: \(r_i = R_i \sin \frac{\pi }{P}\).
LBP-HF-S-M histograms from c adjacent scales are concatenated into a single descriptor. Invariance to scale changes is increased by creating \(n_\text {conc}\) multi-scale descriptors for one image. See Fig. 1 for the overview of the texture description method.
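The scale-space parameters can be generated as in the following sketch; the choice of the Gaussian \(\sigma\) is our assumption (set so that most of the filter mass lies within the effective radius \(r_i\)):

```python
import numpy as np

P, n_scales = 8, 10
R = np.sqrt(2.0) ** np.arange(n_scales)   # R_1 = 1, R_i = sqrt(2) * R_{i-1}
r = R * np.sin(np.pi / P)                 # effective radii: areas at a scale touch
sigma = r / 2.0                           # assumption: most Gaussian mass within r_i
for Ri, ri, si in zip(R, r, sigma):
    print(f'R = {Ri:6.3f}  r = {ri:6.3f}  sigma = {si:6.3f}')
```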
Support Vector Machine and feature maps
In most applications, a Support Vector Machine (SVM) classifier with a suitable non-linear kernel provides higher recognition accuracy at the price of significantly higher time complexity and higher storage demands (dependent on the number of support vectors). An approach for efficient use of additive kernels via explicit feature maps is described by Vedaldi and Zisserman [75] and can be combined with a linear SVM classifier. Using linear SVMs on feature-mapped data improves the recognition accuracy, while preserving linear SVM advantages like fast evaluation and low storage (independent on the number of support vectors), which are both very practical in real time applications. In Ffirst we use the explicit feature map approximation of the histogram intersection kernel, although the \(\chi ^2\) kernel leads to similar results.
The "One versus All" classification scheme is used for multi-class classification, implementing the Platt's probabilistic output [76, 77] to ensure SVM results comparability among classes. The maximal posterior probability estimate over all scales is used to determine the resulting class.
In our experiments we use a stochastic dual coordinate ascent [78] linear SVM solver implemented in the VLFeat library [79].
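For illustration, a similar pipeline can be assembled in Python with scikit-learn (the experiments above used VLFeat); since sklearn ships an explicit additive-\(\chi ^2\) feature map, and the \(\chi ^2\) kernel performs similarly to histogram intersection as noted above, we use it as a stand-in. Platt's probabilistic outputs are obtained by sigmoid calibration:

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# X: rows are non-negative Ffirst descriptors, y: species labels (stand-in data)
X, y = np.random.rand(100, 512), np.random.randint(0, 5, 100)

clf = make_pipeline(
    AdditiveChi2Sampler(sample_steps=2),   # explicit chi^2 feature map
    CalibratedClassifierCV(LinearSVC(C=1.0), method='sigmoid'),  # Platt scaling
)
clf.fit(X, y)
proba = clf.predict_proba(X[:3])           # per-class posterior estimates
```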
Adding rotational invariants
The LBP-HF features used in the proposed Ffirst description are usually built from the DFT magnitudes of differently rotated uniform patterns. We propose to use all LBP instead of just the subset of uniform patterns. Note that in this case, some orbits have a lower number of patterns, since some non-uniform patterns show symmetries, as illustrated in Fig. 1.
The full set of local binary patterns divided into 36 orbits for the Histogram Fourier features. Patterns in one orbit only differ by rotation
Another rotational invariants are computed from the first DFT coefficients for each orbit:
$$\begin{aligned} \text {LBP-HF}^{+}(n) = \sqrt{ H(n,1) \overline{H(n+1,1)}} \end{aligned}$$
\(\hbox {Ffirst}^{\forall +}\) denotes the method using the full set of patterns for LBP-HF features and adding the additional LBP-\(\hbox {HF}^{+}\) features.
Recognition of segmented textural objects
We propose to extend Ffirst to segmented textural objects by treating the border and the interior of the object segment separately.
Let us consider a segmented object region \({\mathbb {A}}\). One may describe only points that have all neighbours at given scale inside \({\mathbb {A}}\). We show that describing a correctly segmented border, i.e. points in \({\mathbb {A}}\) with one or more neighbours outside \({\mathbb {A}}\) (see Fig. 2), adds additional discriminative information.
Segmentation of the leaf interior (blue) and border region (red) at different scales given by LBP radius R. The border region is defined as all points which have at least one neighbour (in \(\mathrm{LBP}_{P,R}\)) outside of the segmented region. a Original image, b Segmentation, R = 2.8, c Segmentation, R = 11.3
We experiment with 5 variants of the recognition method, differing in the processing of the border region:
\(\hbox {Ffirst}_\text {a}\) describes all pixels in \({\mathbb {A}}\) and maximizes the posterior probability estimate (i.e. SVM Platt's probabilistic output) over all \(n_\text {conc}\) scales.
\(\hbox {Ffirst}_\text {i}\) describes only the segment interior, i.e. pixels in \({\mathbb {A}}\) with all neighbours in \({\mathbb {A}}\).
\(\hbox {Ffirst}_\text {b}\) describes only the segment border, i.e. pixels in \({\mathbb {A}}\) with at least one neighbour outside \({\mathbb {A}}\).
\(\hbox {Ffirst}_{\text {ib}{\sum }}\) combines the \(\hbox {Ffirst}_\text {i}\) and \(\hbox {Ffirst}_\text {b}\) descriptors and maximizes the sum of their posterior probability estimates over \(n_\text {conc}\) scales.
\(\hbox {Ffirst}_{\text {ib}{\prod }}\) combines the \(\hbox {Ffirst}_\text {i}\) and \(\hbox {Ffirst}_\text {b}\) descriptors and maximizes the product of their posterior probability estimates over \(n_\text {conc}\) scales.
The leaf databases contain images of leaves on an almost white background. Segmentations were obtained by thresholding using Otsu's method [80].
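A sketch of the segmentation and of the interior/border split at one scale, using scikit-image (the file name is a stand-in; we assume leaves darker than the white background, and the disk radius plays the role of the LBP radius \(R\)):

```python
from skimage import color, io
from skimage.filters import threshold_otsu
from skimage.morphology import binary_erosion, disk

gray = color.rgb2gray(io.imread('leaf.jpg'))  # stand-in input image
mask = gray < threshold_otsu(gray)            # leaf darker than the white background
R = 3                                         # LBP radius at the current scale
interior = binary_erosion(mask, disk(R))      # all neighbours inside the segment
border = mask & ~interior                     # at least one neighbour outside
```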
Deep learning approach to plant identification
For significantly more complex tasks—where the photos are nearly unconstrained (depicting different plant organs or the whole plant in its natural environment), with complex background, and much higher numbers of classes (10,000 in the case of LifeCLEF 2017 [81]), we choose a deep learning approach and utilize state-of-the-art deep convolutional neural networks, which succeeded in a number of computer vision tasks, especially those related to complex recognition and detection of objects. Given the enormous popularity of convolutional neural networks in the last years and the volume of available deep learning literature (e.g. [82,83,84]), we skip most of the deep learning theory and we only briefly describe our choices of architectures, models and techniques for our contributions to the PlantCLEF challenges.
In the experiments, we used the state-of-the-art CNN architectures as a baseline and added modifications described below: ensemble training with bagging, maxout, and bootstrapping for training on noisy labels. We initialized all convolutional layer parameters from networks pre-trained on the 1 million ImageNet images, and then fine-tuned the networks on the training data for the plant recognition task. Such initialization is a common practice that speeds up training and helps to avoid early overfitting on tasks with a small number of training images.
In deep learning challenges it is a common practice to train several networks on different (but not necessarily mutually exclusive) subsets of the training data. An ensemble of such networks, commonly combined by a simple voting mechanism (e.g. sum or maximum of class prediction scores), tends to outperform individual networks. In the PlantCLEF 2015 plant classification challenge, Choi [41] gained a significant margin in precision using bagging of 5 networks.
Maxout
Maxout [85] is based on an activation function, which takes a maximum over k parts (e.g. slices) of a network layer:
$$\begin{aligned} h_i(x)=\max _{j\in \left[ 1,k\right] } z_{ij} , \end{aligned}$$
where \(z_{ij} = {\mathbf {x}}^\text {T}{\mathbf {W}}_{..ij} + b_{ij}\) can be a standard fully connected (FC) layer with parameters \(W \in {\mathbb {R}}^{d\times m \times k}\), \(b \in {\mathbb {R}}^ {m \times k}\).
One can understand maxout as a piecewise linear approximation to a convex function, specified by the weights of the previous layer. Maxout was designed [85] to be combined with dropout [86].
The maxout is not used on top of the FC classification layer (which would mean increasing its size k-times); instead, we add an additional FC layer with maxout activation before the classification FC layer.
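A minimal PyTorch sketch of such a maxout FC layer (our own implementation, not the original training code):

```python
import torch.nn as nn

class MaxoutFC(nn.Module):
    """Fully connected layer followed by a maxout over k linear pieces."""
    def __init__(self, in_features, out_features, k=4):
        super().__init__()
        self.out_features, self.k = out_features, k
        self.fc = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.fc(x)                              # (batch, m * k)
        z = z.view(-1, self.out_features, self.k)   # (batch, m, k)
        return z.max(dim=2).values                  # maximum over the k pieces

# e.g. a classification head: dropout -> maxout FC (m=512, k=4) -> FC classifier
head = nn.Sequential(nn.Dropout(0.2), MaxoutFC(2048, 512, k=4), nn.Linear(512, 1000))
```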
In order to improve learning from noisy labels in the scenario of the PlantCLEF 2017 plant identification challenge, we experimented with the so called "bootstrapping" of Reed et al. [87]. An objective is proposed that takes into account the current predictions of the network, with the intention to lower the effect of incorrect labels. Reed et al. propose two variants of the objective:
Soft bootstrapping uses the probabilities \(q_k\) given by the network (softmax):
$$\begin{aligned} { L }_\text {soft} ({\mathbf {q}},{\mathbf {t}}) = \sum _{k=1}^N \left[ \beta t_k + ( 1 - \beta ) q_k \right] \log q_k, \end{aligned}$$
where \(t_k\) are the provided labels and \(\beta\) is a parameter of the method. The authors [87] point out that the objective is equivalent to softmax regression with minimum entropy regularization, which was previously studied in [88] and encourages high confidence in predicting labels.
Hard bootstrapping uses the strongest prediction \(z_k = \left\{ \begin{array}{ll}1 & \text {if } k=\mathop {\mathrm {arg\,max}}_i q_i \\ 0 & \text {otherwise}\end{array}\right.\)
$$\begin{aligned} { L }_\text {hard} ({\mathbf {q}},{\mathbf {t}}) = \sum _{k=1}^N \left[ \beta t_k + ( 1 - \beta ) z_k \right] \log q_k \end{aligned}$$
We decided to follow the best performing setting of [87] and use hard bootstrapping with \(\beta =0.8\) in our experiments. The search for the optimal value of \(\beta\) was omitted for computational reasons and the limited time for the competition, yet the dependence between the amount of label noise and the optimal setting of the hyperparameter \(\beta\) is a topic for future work.
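For reference, a PyTorch sketch of the hard-bootstrapping objective with \(\beta =0.8\) (our own implementation; `logits` are the pre-softmax scores and `targets` the possibly noisy labels):

```python
import torch.nn.functional as F

def hard_bootstrap_loss(logits, targets, beta=0.8):
    """Cross-entropy against beta * t + (1 - beta) * z, where z is the one-hot
    of the network's own strongest prediction (hard bootstrapping)."""
    log_q = F.log_softmax(logits, dim=1)
    z = log_q.argmax(dim=1).detach()       # current strongest predictions
    nll_t = F.nll_loss(log_q, targets)     # -mean of t_k log q_k over the batch
    nll_z = F.nll_loss(log_q, z)           # -mean of z_k log q_k over the batch
    return beta * nll_t + (1.0 - beta) * nll_z
```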
ResNet with maxout for LifeCLEF 2016
In LifeCLEF 2016, we utilized the state-of-the-art very deep 152-layer residual network of He et al. [67]. The residual learning framework allows to efficiently train networks that are substantially deeper than the previously used CNN architectures. We used the model pre-trained on ImageNet which is publicly available [89] and inserted an additional fully connected layer sliced into 4 parts with 512 neurons each, and applied the maxout activation function on the slices. The parameters of both the new FC layer and the following 1000-way FC classification layer were initialized using the method of Glorot [90].
Thereafter, we fine-tuned the network for 150,000 iterations with the following parameters:
The learning rate was set to \(10^{-3}\) and lowered by a factor of 10 after every 100,000 iterations.
The momentum was set to 0.9, weight decay to \(2\cdot 10^{-4}\).
The effective batch size was set to 28 (either computed at once on NVIDIA Titan X, or split into more batches using Caffe's iter_size parameter when used on GPUs with lower VRAM).
A horizontal mirroring of input images was performed during training.
Due to computational limits at training time, we only performed bagging of 3 networks, although we expect that using a higher number of bagged networks would further improve the accuracy. For training the ensemble of networks, a different \(\frac{1}{3}\) of the training data was removed in each bag. The voting was done by taking the species-wise maximum of output probabilities.
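The voting then reduces to an element-wise maximum over the per-network class probabilities, e.g.:

```python
import numpy as np

# probs: (n_networks, n_images, n_classes) softmax outputs of the bagged CNNs
probs = np.random.dirichlet(np.ones(1000), size=(3, 10))  # stand-in predictions
ensemble = probs.max(axis=0)            # species-wise maximum over the networks
predictions = ensemble.argmax(axis=1)   # final class per image
```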
Inception-ResNet-v2 with maxout for LifeCLEF 2017
Our model for PlantCLEF 2017 was based on the state-of-the-art convolutional neural network architecture, the Inception-ResNet-v2 model [70], which introduced residual Inception blocks - a new type of the Inception block making use of the residual connections from [67]. Both the paper [70] and our preliminary experiments show that this network architecture leads to results superior to other state-of-the-art CNN architectures. The publicly available [91] Tensorflow model pretrained on ImageNet was used to initialize the parameters of convolutional layers. The main hyperparameters were set as follows:
Optimizer: RMSProp with momentum 0.9 and decay 0.9.
Weight decay: 0.00004.
Learning rate: Starting LR 0.01 with decay factor 0.94, exponential decay, ending LR 0.0001.
Batch size: 32.
We added an FC layer with 4096 units. The maxout activation operates over \(k=4\) linear pieces of the FC layer, i.e. \(m=1024\). Dropout with a keep probability of 80% is applied before the FC layers. The final layer is a 10,000-way softmax classifier corresponding to the number of plant species in the 2017 task.
The PlantCLEF 2017 training data consists of 2 sets, both covering the same 10,000 plant species:
A "trusted" training set based on the online collaborative Encyclopedia Of Life (EoL), where the ground truth labels should be assigned correctly.
The "noisy" training set built using web crawlers (more precisely, the Google and Bing image search results) and may thus contain images which are not related to the declared plant species.
We fine-tuned our networks in three different ways:
Using only "trusted" (EoL) training data.
Using both "trusted" and "noisy" training data (EoL + web).
Filtering the "noisy" data using a model pretrained on the "trusted" data, and then fine-tuning on the combination of "trusted" and "filtered noisy" data (EoL + filtered web).
Datasets and evaluation methodology
Bark recognition is evaluated on a dataset collected by Österreichische Bundesforste (Austrian Federal Forests), which was introduced in 2010 by Fiel and Sablatnig [92] and contains 1182 bark images from 11 classes. We denote it as the Austrian Federal Forests (AFF) bark dataset (Footnote 4). The resolution of the images varies (between 0.4 and 8.0 Mpx). This dataset is not publicly available, but it was kindly provided by the Computer Vision Lab, TU Vienna, for academic purposes, with courtesy by Österreichische Bundesforste/Archiv.
Unlike in bark recognition, there is a number of existing datasets for leaf classification, most of them publicly available. The datasets and their experimental settings are briefly described below:
The Austrian Federal Forest (AFF) leaf dataset was used by Fiel and Sablatnig [11] for recognition of trees, and was kindly provided together with the bark dataset described previously. It contains 134 photos of leaves of the 5 most common Austrian broad leaf trees. The leaves are placed on a white background. The results are compared using the protocol of Fiel and Sablatnig, i.e. using 8 training images per leaf class.
The Flavia leaf dataset contains 1907 images (1600 × 1200 px) of leaves from 32 plant species on white background, 50–77 images per class. The dataset was introduced by Wu et al. [17], who used 10 images per class for testing and the rest of the images for training. More recent publications use 10 randomly selected test images and 40 randomly selected training images per class, achieving better recognition accuracy even with the lower number of training samples. In the case of the two best results reported by Lee et al. [20, 21], the number of training samples is not clearly stated (Footnote 5). Some authors divide the set of images for each class into two halves, one for training and the other for testing.
The Foliage leaf dataset by Kadir et al. [19, 24] contains 60 classes of leaves from 58 species. The dataset is divided into a training set with 100 images per class and a test set with 20 images per class.
The Swedish leaf dataset was introduced in Söderkvist's diploma thesis [25] and contains images of leaves scanned using a 300 dpi colour scanner. There are 75 images for each of 15 tree classes. The standard evaluation scheme uses 25 images for training and the remaining 50 for testing. Note: The best reported result of Qi et al. [27] was found on the project homepage [29].
The Leafsnap dataset version 1.0 by Kumar et al. [12] was publicly released in 2014. It covers 185 tree species from the Northeastern United States. It contains 23147 high quality Lab images and 7719 Field images. The authors note that the released dataset does not exactly match that used to compute results for the paper, nor the currently running version on their servers, yet it seems to be similar to the dataset used in [12] and should allow at least a rough comparison. In the experiments of [12], leave-one-image-out species identification was performed, using only the Field images as queries, matching against all other images in the recognition database. The probability of the correct match appearing among the top 5 results is taken as the resulting score. Note: The classification accuracy of [12] for the 1st result in Table 2 is estimated from a plot in [12]. Because the leave-one-image-out testing scheme would demand re-training our classifiers for each tested image, we rather perform 10-fold cross validation, i.e. divide the set of Field images into 10 parts, testing each part with classifiers learned on the other parts together with the Lab images (see the sketch after this list).
The Middle European Woods (MEW) dataset was introduced by Novotný and Suk [22]. It contains 300 dpi scans of leaves belonging to 153 classes (from 151 botanical species) of Central European trees and shrubs. There are 9745 samples in total, at least 50 per class. The experiments are performed using half of the images in each class for training and the other half for testing.
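Referring back to the Leafsnap protocol above, the fold construction can be sketched as follows (a minimal NumPy illustration; function and variable names are ours, and all Lab images are appended to every training set):

```python
import numpy as np

def leafsnap_folds(n_field, n_folds=10, seed=0):
    """Yield (train_idx, test_idx) pairs over the Field images; classifiers
    are then learned on train_idx plus all Lab images."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_field)
    for part in np.array_split(order, n_folds):
        yield np.setdiff1d(order, part), part
```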
The PlantCLEF challenge datasets depict plants in a significantly wider range of views, such as leaves, flowers, fruits, stems, entire plants and branches.
In the plant identification challenge PlantCLEF 2016, the training set contained 113,205 images of 1000 species of herbs, trees and ferns, and included also other meta-data, such as the type of view (fruit, flower, entire plant, etc.), observation ID and GPS coordinates (if available). The test set contained 8000 pictures, including "distractor" images which did not depict one of the 1000 species.
In the PlantCLEF 2017 challenge, there were two training sets available: a "trusted" set of 256,287 labelled images of 10,000 plant species with meta-data, and a "noisy" set with URLs to more than 1.4 million weakly-labelled web images obtained by Google and Bing image search. The evaluation of the task was performed on a test set containing 25,170 images of 13,471 observations (specimens). There are no "distractor" images in the 2017 test set.
While the PlantCLEF 2016 challenge was evaluated based on the mean Average Precision (mAP), PlantCLEF 2017 used a less common measure, the mean reciprocal rank (MRR):
$$\begin{aligned} \mathrm{MRR} = \dfrac{1}{\vert Q \vert }\sum ^{\vert Q \vert }_{i=1}\dfrac{1}{\text {rank}_i}, \end{aligned}$$
where \(\vert Q \vert\) is the total number of queries in the test set and \(\text {rank}_i\) is the rank of the correct result for the i-th query.
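As a check of the definition, the MRR can be computed directly from the ranks of the correct results (a minimal sketch; the function name is ours):

```python
def mean_reciprocal_rank(ranks):
    """ranks[i] is the 1-based rank of the correct species for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# mean_reciprocal_rank([1, 2, 4]) == (1 + 1/2 + 1/4) / 3 == 0.58333...
```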
Tree bark classification
Results of our texture recognition approach to tree bark classification on the Austrian Federal Forests bark dataset are compared with the best published results in Table 1. Note that the MS-LBP method assumes the orientation is fixed, which seems to be a useful assumption in the case of this dataset. However, unlike Ffirst, it does not provide rotation invariance. Because the bark dataset is very small, we skip experiments with CNNs, which need a considerably higher amount of data for the standard training/fine-tuning procedures.
Table 1 Bark classification results of Ffirst and the state-of-the-art methods
Leaf classification
Applying the proposed fast features invariant to rotation and scale of texture to the identification of leaves [5] led to excellent results on standard leaf recognition datasets, with a novel approach to visual leaf identification: a leaf is represented by a pair of local feature histograms, one computed from the leaf interior, the other from the border, see Fig. 2. This description utilizing Ffirst outperforms the state-of-the-art on all tested leaf datasets (the Austrian Federal Forests, Flavia, Foliage, Swedish and Middle European Woods datasets), achieving excellent recognition rates above 99%. Updated results of our leaf recognition method originally published in [5] are in Table 2.
Deep convolutional neural networks are difficult to apply to experiments with small leaf datasets. To obtain a comparison with our textural method, we performed an experiment on the Middle European Woods dataset, fine-tuning from an ImageNet-pretrained model. Note that due to the high computational complexity and limited GPU resources, we only evaluated this method on one random data split (in both directions), while Ffirst was evaluated on 10 random splits. After 200,000 steps, the Inception-ResNet-v2 network with maxout outperforms the previous results significantly, achieving 99.9 and 100.0% accuracy, respectively. Moreover, the correct class always appears among the top 5 predictions.
Table 2 Evaluation of Ffirst on available leaf datasets: Austrian Federal Forests, Flavia, Foliage, Swedish, Middle European Woods and Leafsnap
PlantCLEF plant identification challenges
In the PlantCLEF 2016 plant identification challenge, our main submission [8] using bagging of our three residual networks with maxout achieved 71.0% mAP (mean average precision), placing us among the top 3 teams in the challenge, where the winning submission achieved 74.2% mAP. Our deep network was actually more precise for single image labelling than the winning submission [39], which pushed the mAP from 61.1 to 74.2% by utilizing the ObservationID meta-information and summing the scores over all images in an observation. Our post-challenge experiments show that summing the scores over observations would boost our system to 78.8% mAP on the PlantCLEF 2016 test data.
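Observation-level scoring amounts to summing the per-image class scores over all images sharing an ObservationID before taking the final ranking. A minimal NumPy sketch of this aggregation, with the data layout and names assumed by us:

```python
import numpy as np

def sum_scores_per_observation(scores, observation_ids):
    """Sum class scores over all images sharing an ObservationID.

    scores:          (N_images, N_species) per-image scores.
    observation_ids: length-N_images sequence of observation identifiers.
    Returns (unique_ids, (N_observations, N_species) summed scores).
    """
    unique_ids, inverse = np.unique(np.asarray(observation_ids),
                                    return_inverse=True)
    summed = np.zeros((len(unique_ids), scores.shape[1]))
    np.add.at(summed, inverse, scores)   # unbuffered row-wise accumulation
    return unique_ids, summed
```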
For PlantCLEF 2017, we fine-tuned our deep networks on the "trusted" (EoL) data only, as well as on the combination of both "trusted" and "noisy" data (EoL + web). We also experimented with the bootstrapping technique for training with "noisy" data. In experiments on our validation set (based on 2016 test data) the networks trained only on the "trusted" data performed slightly better. The two best performing networks trained on the "trusted" (EoL) dataset, each achieving 65% accuracy on the validation set, were then used in the following experiments.
Net #1: Fine-tuned on "trusted" (EoL) set without maxout for 200k it.
Net #2: Fine-tuned on "trusted" (EoL) set with maxout for 200k it.
A "filtered noisy" training set of 425k images was acquire from the noisy set by keeping only images where the prediction of Net #1 was equal to the label.
In order to train ensembles with bagging, we divided the data into 3 disjoint folds. The following networks were then further fine-tuned, each on a different 2 of the 3 folds, for 50,000 iterations.
Net #3, #4, #5 Fine-tuned from Net #1 for 50k it. on the "trusted" dataset.
Net #6, #7, #8 Fine-tuned from Net #2 for 50k it. on the "trusted" dataset, with maxout.
Net #9, #10, #11 Fine-tuned from Net #1 for 50k it. on the "trusted" and "filtered noisy" data.
Net #12, #13, #14 Fine-tuned from Net #1 for 50k it. on the "trusted" and "filtered noisy" data, with hard bootstrapping.
Net #15, #16, #17 Fine-tuned from Net #2 for 50k it. on the "trusted" and "filtered noisy" data, with maxout.
The individual fine-tuned networks did not achieve much improvement compared to networks #1 and #2: their accuracies ranged from 57 to 67% on the validation set. However, combinations of the differently fine-tuned networks are beneficial: an ensemble of all 17 networks achieved a final validation accuracy of 73% and, as our submission to PlantCLEF 2017, ranked 3rd with a Mean Reciprocal Rank of 84.3%.
The accuracy of Ffirst is suitable for practical applications in leaf and bark recognition, exceeding 99% for most leaf datasets. The method is computationally efficient and fast: processing 200 × 200 pixel images takes about 0.05 s on a laptop without using a GPU. That makes real-time processing on common handheld devices (such as low-end smartphones) feasible. The drawback of such a global texture descriptor is its dependence on perfect segmentation of the area of interest, which makes it unsuitable for more complex pictures of plants. In the case where the whole image area contains bark texture, no segmentation is needed. For leaf scans or photographs of leaves on a white background, segmentation is trivial and all information is visible in the image. For more complex cases, such as unconstrained plant recognition "in the wild" including occlusions, complex background and highly variable image content, a more general model is needed.
The generality and higher capacity of CNNs is suitable for such more complex tasks. With large amounts of training data, state-of-the-art convolutional neural network architectures achieve the best results on such tasks, as validated by results of the recent PlantCLEF challenges [38, 43].
CNN models usually need a very large amount of training data. This need can be partially reduced by initializing the model variables from a pre-trained model (usually trained on ImageNet). An experiment with the modified state-of-the-art Inception-ResNet-v2 network shows that with sufficient training data, fine-tuning a deep convolutional neural network leads to almost perfect leaf classification, achieving at least 99.9% accuracy on the MEW leaf dataset. Although this leaf dataset represents a considerable number of classes (153), it is still much lower than in the case of the PlantCLEF challenges (10,000 species in 2017). There is a lack of larger bark datasets for similar experiments. It is common for the more constrained tasks that many of the publicly available datasets are rather small in the number of classes and images; the AFF datasets are a good example. This variance in dataset size has to be taken into account when interpreting the achieved accuracy: for example, Ffirst achieves 100% accuracy on the AFF leaf dataset, which only contains 5 plant species, while the 99.5% accuracy on the MEW dataset with 153 classes is definitely more informative. Besides dataset size, we also noticed a significant effect of segmentation errors on the performance in the case of the Leafsnap dataset.
The disadvantage of common CNNs is their high hardware demands for training the models and for real-time processing; in practice, this is handled by massive parallelization on GPUs or other hardware units specialized for deep learning, such as the recently introduced Tensor Processing Units. From the network design point of view, the processing speed might be increased by quantization and pruning, but also by using smaller models, such as MobileNets [93]. All of these methods, however, tend to decrease the model accuracy.
We observe that building an ensemble of such networks improves accuracy significantly by combining the expertise learned by several models converging into different local minima. We believe that this raises an interesting question for future research: How to combine ensembles of such models in a more efficient way?
Identification of plant species from pictures of bark and leaves using textural recognition with the proposed Ffirst method leads to state-of-the-art results, while keeping computational demands small, which makes it suitable for real-time processing. Our experiment shows that with enough training data, an even better accuracy can be achieved using a convolutional neural network, performing leaf classification almost perfectly with 99.9–100.0% accuracy on the MEW dataset with 153 plant species.
The results suggest that with a sufficient amount of training data, recognition of segmented leaves is practically a solved problem. Learning from a small number of samples remains an open problem, and is of practical relevance for uncommon plant species or rare phenotypes.
The generality and higher capacity of state-of-the-art CNNs makes them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and suffer from occlusions and background clutter. That was demonstrated by the results of the recent PlantCLEF challenges [38, 43], where the proposed deep learning methods performed competitively, finishing among the top 3 teams in both 2016 and 2017.
Footnote 1: http://leafsnap.com/.
Footnote 2: LBP-HF (as well as \(\hbox {LBP}^{ri}\)) are rotation invariant only in the sense of a circular bit-wise shift, e.g. rotation by multiples of \(22.5^{\circ }\) for \(\hbox {LBP}_{16,R}\).
Footnote 3: The Gaussian filtering is used for a scale i only if \(\sigma _i > 0.6\), as filtering with lower \(\sigma _i\) leads to a significant loss of information.
Footnote 4: The Computer Vision Lab, TU Vienna, kindly made the dataset available to us for academic purposes, with courtesy by Österreichische Bundesforste/Archiv.
Footnote 5: In [20], the result presented as "95.44% (1820 / 1907)" seems to be tested on all images.
AFF: Austrian Federal Forest (dataset)
CNN: convolutional neural network
COCO: common objects in context (dataset, challenge)
DFT: discrete Fourier transform
EoL: encyclopedia of life (web encyclopedia), http://eol.org/
FC: fully connected (layer)
Ffirst: fast features invariant to rotation and scale of texture
LBP: Local Binary Patterns
mAP: mean average precision
MEW: Middle European Woods (dataset)
SIFT: Scale Invariant Feature Transform
SVM: Support Vector Machine
Linnaeus C. Systema naturae: per regna tria naturae, secundum classes, ordines, genera, species, cum characteribus, differentiis, synonymis, locis, vol. 2. 10th ed. Laurentius Salvius; 1759.
Stearn WT. The background of Linnaeus's contributions to the nomenclature and methods of systematic biology. Syst Zool. 1959;8(1):4–22.
Chapman AD, et al. Numbers of living species in Australia and the world. Canberra: Department of the Environment, Water, Heritage and the Arts; 2009.
Šulc M, Matas J. Kernel-mapped histograms of multi-scale LBPs for tree bark recognition. In: 2013 28th International conference of image and vision computing New Zealand (IVCNZ); 2013. p. 82–87.
Šulc M, Matas J. Texture-based leaf identification. In: Agapito L, Bronstein MM, Rother C, editors. Computer vision—ECCV 2014 workshops, Part IV. volume 8928 of LNCS. Cham: Springer International Publishing AG; 2015. p. 181–96.
Šulc M, Matas J. Fast features invariant to rotation and scale of texture. In: Agapito L, Bronstein MM, Rother C, editors. Computer vision—ECCV 2014 workshops, Part II. volume 8926 of LNCS. Gewerbestrasse 11, CH-6330 Cham (ZG), Switzerland: Springer International Publishing AG; 2015. p. 47–62.
Šulc M, Mishkin D, Matas J. Very deep residual networks with maxout for plant identification in the wild. In: Working notes of CLEF 2016—conference and labs of the evaluation forum; 2016.
Šulc M, Matas J. Learning with noisy and trusted labels for fine-grained plant recognition. In: Working notes of CLEF 2017—conference and labs of the evaluation forum; 2017.
Belhumeur PN, Chen D, Feiner S, Jacobs DW, Kress WJ, Ling H, et al. Searching the world's herbaria: a system for visual identification of plant species. In: Computer vision–ECCV 2008. Springer; 2008. p. 116–29.
Nilsback ME, Zisserman A. An automatic visual flora: segmentation and classification of flower images. Oxford: Oxford University; 2009.
Fiel S, Sablatnig R. Automated identification of tree species from images of the bark, leaves and needles. In: Proceedings of 16th computer vision winter workshop. Mitterberg, Austria; 2011. p. 1–6.
Kumar N, Belhumeur PN, Biswas A, Jacobs DW, Kress WJ, Lopez IC, et al. Leafsnap: a computer vision system for automatic plant species identification. In: Computer vision–ECCV 2012. Springer; 2012. p. 502–16.
Barthélémy D, Boujemaa N, Mathieu D, Molino JF, Bonnet P, Enficiaud R, et al. The Pl@ntNet project: a computational plant identification and collaborative information system. Tech. Rep., XIII World forestry congress; 2009.
Joly A, Goëau H, Glotin H, Spampinato C, Bonnet P, Vellinga WP, et al. LifeCLEF 2016: multimedia life species identification challenges. In: Proceedings of CLEF 2016; 2016.
Kadir A, Nugroho LE, Susanto A, Santosa PI. A comparative experiment of several shape methods in recognizing plants. Int J Comput Sci Inf Technol. 2011;3(3):256–63.
Agarwal G, Belhumeur P, Feiner S, Jacobs D, Kress WJ, Ramamoorthi R, et al. First steps toward an electronic field guide for plants. Taxon. 2006;55(3):597–610.
Wu SG, Bao FS, Xu EY, Wang YX, Chang YF, Xiang QL. A leaf recognition algorithm for plant classification using probabilistic neural network. In: 2007 IEEE international symposium on signal processing and information technology. IEEE; 2007. p. 11–16.
Kadir A, Nugroho LE, Susanto A, Santosa PI. Performance improvement of leaf identification system using principal component analysis. Int J Adv Sci Technol. 2012;44:113–24.
Kadir A, Nugroho LE, Susanto A, Santosa PI. Experiments of Zernike moments for leaf identification. J Theor Appl Inf Technol. 2012;41(1):82–93.
Lee KB, Hong KS. Advanced leaf recognition based on leaf contour and centroid for plant classification. In: The 2012 international conference on information science and technology; 2012. p. 133–35.
Lee K-B, Chung K-W, Hong K-S. An implementation of leaf recognition system. In: Proceedings of the 7th international conference on information security and assurance 2013, ASTL vol. 21; 2013. p. 152–5.
Novotný P, Suk T. Leaf recognition of woody species in Central Europe. Biosyst Eng. 2013;115(4):444–52.
Karuna G, Sujatha B, Giet R, Reddy PC. An efficient representation of shape for object recognition and classification using circular shift method. Int J Sci Eng Res. 2013;4(12):703–7.
Kadir A, Nugroho LE, Susanto A, Santosa PI. Neural network application on foliage plant identification. Int J Comput Appl. 2011;29:15–22.
Söderkvist O. Computer vision classification of leaves from swedish trees. Master thesis. Linköping University; 2001.
Wu J, Rehg JM. CENTRIST: a visual descriptor for scene categorization. IEEE Trans Pattern Anal Mach Intell. 2011;33(8):1489–501.
Qi X, Xiao R, Guo J, Zhang L. Pairwise rotation invariant co-occurrence local binary pattern. In: Computer vision—ECCV 2012. Springer; 2012. p. 158–71.
Lowe DG. Object recognition from local scale-invariant features. In: The proceedings of the seventh IEEE international conference on computer vision, 1999, vol. 2. IEEE; 1999. p. 1150–57.
Pairwise rotation invariant co-occurrence local binary pattern. Available from: http://qixianbiao.github.io. Accessed 14 Dec 2017.
Pydipati R, Burks T, Lee W. Identification of citrus disease using color texture features and discriminant analysis. Comput Electron Agric. 2006;52(1):49–59.
Chi Z, Houqiang L, Chao W. Plant species recognition based on bark patterns using novel Gabor filter banks. In: Proceedings of ICNNSP, vol. 2; 2003.
Wan YY, Du JX, Huang DS, Chi Z, Cheung YM, Wang XF, et al. Bark texture feature extraction based on statistical texture analysis. In: Proceedings of ISIMP; 2004.
Song J, Chi Z, Liu J, Fu H. Bark classification by combining grayscale and binary texture features. In: Proceedings of ISIMP; 2004.
Huang ZK, Zheng CH, Du JX, Wan Y. Bark classification based on textural features using artificial neural networks. In: Wang J, Yi Z, Zurada JM, Lu BL, Yin H, editors. Advances in neural networks—ISNN 2006. Lecture Notes in Computer Science, vol. 3972. Berlin: Springer; 2006.
Boudra S, Yahiaoui I, Behloul A. A comparison of multi-scale local binary pattern variants for bark image retrieval. In: International conference on advanced concepts for intelligent vision systems. Springer; 2015. p. 764–75.
Goëau H, Joly A, Bonnet P, Selmi S, Molino JF, Barthélémy D, et al. Lifeclef plant identification task 2014. In: CLEF2014 Working notes. Working notes for CLEF 2014 conference, Sheffield, UK, September 15-18, 2014. CEUR-WS; 2014. p. 598–615.
Joly A, Goëau H, Glotin H, Spampinato C, Bonnet P, Vellinga WP, et al. LifeCLEF 2015: multimedia life species identification challenges. In: International conference of the cross-language evaluation forum for european languages. Springer; 2015. p. 462–83.
Goëau H, Bonnet P, Joly A. Plant identification in an open-world (LifeCLEF 2016). In: Working notes of CLEF 2016—conference and labs of the evaluation forum; 2016.
Hang ST, Tatsuma A, Aono M. Bluefield (KDE TUT) at LifeCLEF 2016 plant identification task. In: Working notes of CLEF 2016—conference and labs of the evaluation forum; 2016.
Ghazi MM, Yanikoglu B, Aptoula E. Open-set plant identification using an ensemble of deep convolutional neural networks. In: Working notes of CLEF 2016—conference and labs of the evaluation forum; 2016.
Choi S. Plant identification with deep convolutional neural network: SNUMedinfo at LifeCLEF plant identification task 2015. In: Working notes of CLEF 2015—conference and labs of the evaluation forum, Toulouse, France, September 8–11, 2015. CEUR-WS; 2015.
Ge Z, McCool C, Sanderson C, Corke P. Content specific feature learning for fine-grained plant classification. In: Working notes of CLEF 2015—conference and labs of the evaluation forum, Toulouse, France, September 8–11, 2015. CEUR-WS; 2015.
Goëau H, Bonnet P, Joly A. Plant identification based on noisy web data: the amazing performance of deep learning (LifeCLEF 2017). In: CLEF working notes 2017; 2017.
Pl@ntNet-identify web service. Available from: http://identify.plantnet-project.org/en/. Accessed 14 Dec 2017.
Goëau H, Bonnet P, Joly A, Bakić V, Barbe J, Yahiaoui I, et al. Pl@ ntnet mobile app. In: Proceedings of the 21st acm international conference on multimedia. ACM; 2013. p. 423–24.
Zhang J, Tan T. Brief review of invariant texture analysis methods. Pattern Recogn. 2002;35(3):735–47.
Mirmehdi M, Xie X, Suri J. Handbook of texture analysis. London: Imperial College Press; 2009.
Chen Ch, Pau LF, Wang PSp. Handbook of pattern recognition and computer vision. Singapore: World Scientific; 2010.
Pietikäinen M. Texture recognition. In: Ikeuchi K, editor. Computer vision: a reference guide. Springer; 2014. p. 789–93.
Hawkins JK. Textural properties for pattern recognition. In: Lipkin BS, editor. Picture processing and psychopictorics. Elsevier; 1970. p. 347–70.
Ojala T, Pietikainen M, Harwood D. Performance evaluation of texture measures with classification based on Kullback discrimination of distributions. In: Proceedings of IAPR 1994, vol. 1; 1994. p. 582–85.
Ojala T, Pietikäinen M, Harwood D. A comparative study of texture measures with classification based on featured distributions. Pattern Recogn. 1996;29(1):51–9.
Ahonen T, Matas J, He C, Pietikäinen M. Rotation invariant image description with local binary pattern histogram Fourier features. In: Proceedings of SCIA '09, Springer-Verlag; 2009. p. 61–70.
Zhao G, Ahonen T, Matas J, Pietikainen M. Rotation-invariant image and video description with local binary pattern features. IEEE Trans Image Process. 2012;21(4):1465–77.
Sifre L, Mallat S. Rotation, scaling and deformation invariant scattering for texture discrimination. In: 2013 IEEE Conference on computer vision and pattern recognition (CVPR). IEEE; 2013. p. 1233–40.
Mao J, Zhu J, Yuille AL. An active patch model for real world texture and appearance classification. In: Computer vision–ECCV 2014. Springer; 2014. p. 140–55.
Cimpoi M, Maji S, Kokkinos I, Mohamed S, Vedaldi A. Describing textures in the Wild. 2013. arXiv preprint arXiv:1311.3618.
Cimpoi M, Maji S, Vedaldi A. Deep filter banks for texture recognition and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 3828–36.
Cimpoi M, Maji S, Kokkinos I, Vedaldi A. Deep filter banks for texture recognition, description, and segmentation. 2015. arXiv preprint arXiv:1507.02620.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014. arXiv preprint arXiv:1409.1556.
Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. Imagenet: A large-scale hierarchical image database. In: IEEE Conference on computer vision and pattern recognition, 2009, CVPR 2009. IEEE; 2009. p. 248–55.
Everingham M, Van Gool L, Williams CK, Winn J, Zisserman A. The pascal visual object classes (voc) challenge. Int J Comput Vision. 2010;88(2):303–38.
Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, et al. Microsoft coco: common objects in context. In: European conference on computer vision. Springer; 2014. p. 740–55.
Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems; 2012. p. 1097–105.
Champ J, Lorieul T, Servajean M, Joly A. A comparative study of fine-grained classification methods in the context of the LifeCLEF plant identification challenge 2015. In: Working notes of CLEF 2015—conference and labs of the evaluation forum, Toulouse, France, September 8–11, 2015. CEUR-WS; 2015.
Reyes AK, Caicedo JC, Camargo JE. Fine-tuning deep convolutional networks for plant recognition. In: Working notes of CLEF 2015—conference and labs of the evaluation forum, Toulouse, France, September 8–11, 2015. CEUR-WS; 2015.
He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2015. arXiv preprint arXiv:1512.03385.
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. 2015. arXiv preprint arXiv:1512.00567.
Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2015. p. 1–9.
Szegedy C, Ioffe S, Vanhoucke V. Inception-v4, inception-resnet and the impact of residual connections on learning. 2016. arXiv preprint arXiv:1602.07261.
Ojala T, Pietikainen M, Maenpaa T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. PAMI. 2002;24(7):971–87.
Pietikäinen M, Ojala T, Xu Z. Rotation-invariant texture classification using feature distributions. Pattern Recogn. 2000;33(1):43–52.
Guo Z, Zhang D. A completed modeling of local binary pattern operator for texture classification. IEEE Trans Image Process. 2010;19(6):1657–63.
Mäenpää T, Pietikäinen M. Multi-scale binary patterns for texture analysis. In: Bigun J, Gustavsson T, editors. Image analysis. SCIA 2003. Lecture Notes in Computer Science, vol. 2749. Berlin: Springer; 2003. p. 885–92.
Vedaldi A, Zisserman A. Efficient additive kernels via explicit feature maps. PAMI. 2011;34(3):480–92.
Platt J. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Adv Large Margin Classif. 1999;10(3):61–74.
Lin HT, Lin CJ, Weng RC. A note on Platt's probabilistic outputs for support vector machines. Mach Learn. 2007;68(3):267–76.
Shalev-Shwartz S, Zhang T. Stochastic dual coordinate ascent methods for regularized loss minimization. 2012. arXiv preprint arXiv:1209.1873.
Vedaldi A, Fulkerson B. VLFeat. An open and portable library of computer vision algorithms. 2008. http://www.vlfeat.org/. Accessed 14 Dec 2017.
Otsu N. A threshold selection method from gray-level histograms. Automatica. 1975;11(285–296):23–7.
Joly A, Goëau H, Glotin H, Spampinato C, Bonnet P, Vellinga WP, et al. LifeCLEF 2017 lab overview: multimedia species identification challenges. In: Proceedings of CLEF 2017; 2017.
LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–44.
Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw. 2015;61:85–117.
Goodfellow I, Bengio Y, Courville A. Deep learning. Cambridge: MIT Press; 2016.
Goodfellow IJ, Warde-Farley D, Mirza M, Courville A, Bengio Y. Maxout networks. 2013. arXiv preprint arXiv:1302.4389.
Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15(1):1929–58.
Reed S, Lee H, Anguelov D, Szegedy C, Erhan D, Rabinovich A. Training deep neural networks on noisy labels with bootstrapping. 2014. arXiv preprint arXiv:1412.6596.
Grandvalet Y, Bengio Y. Entropy regularization. In: Chapelle O, Scholkopf B, Zien A, editors. Semi-supervised learning. MIT Press; 2006. p. 151–68. http://mitpress.universitypressscholarship.com/view/10.7551/mitpress/9780262033589.001.0001/upso-9780262033589-chapter-9.
Pretrained models for Deep Residual Networks. Available from: https://github.com/KaimingHe/deep-residual-networks#models. Accessed 14 Dec 2017.
Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: International conference on artificial intelligence and statistics; 2010. p. 249–56.
Pretrained Tensorflow models. Available from: https://github.com/tensorflow/models/tree/master/research/slim#Pretrained. Accessed 14 Dec 2017.
Fiel S, Sablatnig R. Automated identification of tree species from images of the bark, leaves and needles [Master Thesis]. Vienna University of Technology; 2010.
Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, et al. Mobilenets: efficient convolutional neural networks for mobile vision applications. 2017. arXiv preprint arXiv:1704.04861.
MŠ and JM proposed the methodology and experiments, and contributed to writing the manuscript. MŠ implemented the proposed methods and conducted the experiments. Both authors analysed the data. Both authors read and approved the final manuscript.
Milan Šulc was supported by the CTU student Grant SGS17/185/OHK3/3T/13. Jiří Matas was supported by The Czech Science Foundation Project GACR P103/12/G084.
PlantCLEF 2016 http://www.imageclef.org/lifeclef/2016/plant/. PlantCLEF 2017 http://www.imageclef.org/lifeclef/2017/plant/. Flavia http://flavia.sourceforge.net/. Swedish http://www.cvl.isy.liu.se/en/research/datasets/swedish-leaf/. Leafsnap http://leafsnap.com/dataset/. MEW http://zoi.utia.cas.cz/tree_leaves/. Foliage http://rnd.akakom.ac.id/foliage/ (not available at the time of submission). Foliage (mirror) http://cmp.felk.cvut.cz/ sulcmila/datasets/foliage_mirror/. AFF leaf and bark: not available online, contact the authors of [11].
Department of Cybernetics, FEE CTU in Prague, Karlovo namesti 13, 121 35, Prague 2, Czech Republic
Milan Šulc & Jiří Matas
Correspondence to Milan Šulc.
Šulc, M., Matas, J. Fine-grained recognition of plants from images. Plant Methods 13, 115 (2017). https://doi.org/10.1186/s13007-017-0265-4
Kernel maps
Plants in computer vision
\begin{document}
\frontmatter
\newgeometry{margin=3cm,centering}
\begin{titlepage} \thispagestyle{empty} \begin{center}
\vspace*{-1cm}
\includegraphics[trim=0 0.2cm 0 0, height=1.8cm,keepaspectratio=true]{./sns-logo}
\includegraphics[trim=0 -0.04cm 0 0, height=1.4cm,keepaspectratio=true]{./lpma-logo}
\includegraphics[trim=0 0.8cm 0 -2cm, height=2.5cm,keepaspectratio=true]{./upmc-logo}
\textsc{\LARGE Scuola Normale Superiore di Pisa}\\ {\it Classe di Scienze}\\[0.3cm]
\textsc{\LARGE Universit\'e Pierre et Marie Curie}\\ {\'Ecole Doctorale de Sciences Math\'ematiques de Paris Centre}\\ {\it Laboratoire de Probabilit\'es et Mod\`eles Al\'eatoires}
\HRule
{\Large \bf Pathwise functional calculus\\and applications to continuous-time finance\\[0.5cm] Calcul fonctionnel non-anticipatif\\et application en finance}
\HRule
{\large Candia Riga\\ Tesi di perfezionamento in Matematica per la Finanza\\ Th\`ese de doctorat de Math\'ematiques\\[0.3cm] Dirig\'e par Rama Cont, co-dirig\'e par Sara Biagini\\%[0.3cm] }
Rapporteurs: Hans F\"ollmer et St\'ephane Cr\'epey
Pr\'esent\'ee et soutenue publiquement le 26/06/2015, devant un jury compos\'e de:
\begin{tabular}{lll} Ambrosio Luigi&Scuola Normale Superiore di Pisa&Examinateur\\ Biagini Sara&Universit\`a di Pisa&Co-Directeur\\ Cont Rama&Universit\'e Pierre et Marie Curie&Directeur\\ Cr\'epey St\'ephane&Universit\'e d'Evry&Rapporteur\\ Marmi Stefano&Scuola Normale Superiore di Pisa&Examinateur\\ Tankov Peter &Universit\'e Denis Diderot&Examinateur \end{tabular}
\end{center} \end{titlepage}
\restoregeometry
\chapter*{Abstract}
\markboth{Abstract}{}
This thesis develops a mathematical framework for the analysis of continuous\hyp time trading strategies which, in contrast to the classical setting of continuous-time finance, does not rely on stochastic integrals or other probabilistic notions.
Using the recently developed \lq non-anticipative functional calculus\rq, we first develop a pathwise definition of the gain process for a large class of continuous-time trading strategies which includes the important class of delta-hedging strategies, as well as a pathwise definition of the self-financing condition.
Using these concepts, we propose a framework for analyzing the performance and robustness of delta-hedging strategies for path-dependent derivatives across a given set of scenarios. Our setting allows for general path-dependent payoffs and does not require any probabilistic assumption on the dynamics of the underlying asset, thereby extending previous results on robustness of hedging strategies in the setting of diffusion models. We obtain a pathwise formula for the hedging error for a general path-dependent derivative and provide sufficient conditions ensuring the robustness of the delta hedge. We show in particular that robust hedges may be obtained in a large class of continuous exponential martingale models under a vertical convexity condition on the payoff functional. Under the same conditions, we show that discontinuities in the underlying asset always deteriorate the hedging performance. These results are applied to the case of Asian options and barrier options.
The last chapter, independent of the rest of the thesis, proposes a novel method, jointly developed with Andrea Pascucci and Stefano Pagliarani, for analytical approximations in local volatility models with L\'evy jumps. The main result is an expansion of the characteristic function in a local L\'evy model, which is worked out in the Fourier space by considering the adjoint formulation of the pricing problem. Combined with standard Fourier methods, our result provides efficient and accurate pricing formulae. In the case of Gaussian jumps, we also derive an explicit approximation of the transition density of the underlying process by a heat kernel expansion; the approximation is obtained in two ways: using PIDE techniques and working in the Fourier space. Numerical tests confirm the effectiveness of the method.
\chapter*{Sommario} \markboth{Sommario}{}
This thesis develops a \lq pathwise\rq\ approach to the modeling of financial markets in continuous time, without resorting to probabilistic assumptions or stochastic models. The main tool used in this thesis is the non-anticipative functional calculus, an analytical theory that replaces the stochastic calculus usually employed in mathematical finance.
We begin in Chapter 1 by introducing the basic theory of the non-anticipative functional calculus and its main results, which we use throughout the thesis. Chapter 2 presents in detail the probabilistic counterpart of this calculus, the \emph{functional \ito\ calculus}, and shows how it allows the classical results on the pricing and hedging of financial derivatives to be extended to the case of options depending on the price path. We also illustrate the relation between partial differential equations with path-dependent coefficients and backward stochastic differential equations. Finally, we consider other, weaker notions of solution to such path-dependent partial differential equations, used in the literature when classical solutions do not exist.
Next, in Chapter 3, we construct a continuous-time financial market model, without probabilistic assumptions and with a finite time horizon, where transaction times are represented by an increasing sequence of time partitions whose mesh converges to 0. We identify the \lq plausible\rq\ paths as those possessing a finite quadratic variation, in the sense of F\"ollmer, along this sequence of partitions. This plausibility condition on the set of admissible paths is consistent with the viewpoint of \lq pathwise\rq\ no-arbitrage conditions.
We complete the picture by introducing a \lq pathwise\rq\ notion of self-financing strategy on a set of price paths. These strategies are defined as limits of simple self-financing strategies whose transaction times belong to the fixed sequence of time partitions. We identify a special class of trading strategies which we prove to be self-financing and whose gain can be computed path by path as a limit of Riemann sums. Moreover, we present a pathwise replication result and an explicit analytical formula for estimating the replication error. Finally, we define a family of integral operators indexed by the paths as isometries between complete normed spaces.
Chapter 4 uses this theoretical framework to propose a pathwise analysis of dynamic hedging strategies. We are interested in particular in the robustness of their performance in the hedging of derivatives that depend on the price path and are monitored in continuous time. We assume that the market agent uses a square-integrable exponential martingale model to compute prices and hedging portfolios; we then analyze the performance of the delta-hedging strategy when it is applied to the realized path of the underlying asset prices rather than to a stochastic dynamics.
First, we consider the case in which a smooth pricing functional is available, and we show that delta hedging is robust if the second vertical derivative of the pricing functional has the same sign as the difference between the model volatility and the realized volatility of market prices. We thus obtain an explicit formula for the replication error along a given path. This formula is the pathwise analogue of the result obtained by El Karoui et al (1997) and generalizes it to the path-dependent case, without resorting to probabilistic assumptions or to the Markov property of the actual dynamics of market prices. Finally, we present sufficient conditions for the pricing functional to have the regularity required for these results on the space of continuous paths.
These results make it possible to analyze the robustness of dynamic hedging strategies. We provide a sufficient condition on the payoff functional which ensures the positivity of the second vertical derivative of the pricing functional, namely the convexity of a certain real function. We also analyze the contribution of jumps of the price path to the replication error incurred by trading in the market according to the delta-hedging strategy. We observe that discontinuities deteriorate the hedging performance. In the special case where the agent uses a generalized Black-Scholes model, if the derivative sold has a payoff monitored in discrete time, then the pricing functional is locally regular on the whole space of stopped continuous paths and its vertical and horizontal derivatives are given in explicit form. We also consider the case of a model with volatility depending on the price path, the Hobson-Rogers model, and we show how the pricing problem can again be reduced to the universal pricing equation introduced in the second chapter. Finally, we show some examples of application of our analysis, namely the hedging of Asian and barrier options.
The last chapter is a study independent of the rest of the thesis, developed jointly with Andrea Pascucci and Stefano Pagliarani, in which we propose a new method for analytical approximation in local volatility models with L\'evy-type jumps. The main result is a series expansion of the characteristic function in a local L\'evy model, obtained in the Fourier space by considering the adjoint formulation of the pricing problem. Together with standard Fourier methods, our result provides efficient and accurate pricing formulae. In the case of Gaussian jumps, we also derive an explicit approximation of the transition density of the underlying process by a heat kernel expansion; this approximation is obtained in two ways: using PIDE techniques and working in the Fourier space. Numerical tests confirm the effectiveness of the method.
\chapter*{R\'esum\'e} \markboth{R\'esum\'e}{}
This thesis develops a pathwise approach to the modeling of financial markets in continuous time, without appealing to probabilistic assumptions or stochastic models. The main tool in this thesis is the non-anticipative functional calculus, an analytical framework that replaces the stochastic calculus usually employed in mathematical finance.
We begin in Chapter 1 by introducing the basic theory of the non-anticipative functional calculus and its main results, which we use throughout the thesis. Chapter 2 details the probabilistic counterpart of this calculus, the functional \ito\ calculus, and shows how this calculus makes it possible to extend the classical results on the pricing and hedging of derivatives to the case of path-dependent options. Moreover, we describe the relation between partial differential equations with path-dependent coefficients and backward stochastic differential equations. Finally, we consider other, weaker notions of solution to these partial differential equations with path-dependent coefficients, which are used in the literature when classical solutions do not exist.
Then, in Chapter 3, we set up a continuous-time financial market model, without probabilistic assumptions and with a finite horizon, where transaction times are represented by a nested sequence of partitions whose mesh converges to $0$. We propose a plausibility condition on the set of admissible paths from the viewpoint of pathwise no-arbitrage conditions. The \lq plausible\rq\ paths turn out to have finite quadratic variation, in the sense of F\"ollmer, along this sequence of partitions.
We complete the framework by introducing a pathwise notion of self-financing strategy on a set of price paths. \\These strategies are defined as limits of simple self-financing strategies whose transaction times belong to the fixed sequence of time partitions. We identify a special class of trading strategies which we prove to be self-financing and whose gain can be computed path by path as a limit of Riemann sums. Moreover, we present a pathwise replication result and an explicit analytical formula for estimating the hedging error. Finally, we define a family of pathwise integral operators (indexed by the paths) as isometries between complete normed spaces.
Chapter 4 employs this theoretical framework to propose a pathwise analysis of dynamic hedging strategies. We are particularly interested in the robustness of their performance in the hedging of \emph{path-dependent} derivatives monitored in continuous time. We assume that the agent uses a square-integrable exponential martingale model to compute prices and hedging portfolios, and we analyze the performance of the delta-hedging strategy when it is applied to the realized path of the underlying price rather than to a stochastic dynamics. First we consider the case where a smooth pricing functional is available, and we show that delta hedging is robust if the second vertical derivative of the pricing functional has the same sign as the difference between the model volatility and the realized market volatility. We also obtain an explicit formula for the hedging error along a given path. This formula is the pathwise analogue of the result of El Karoui et al (1997) and generalizes it to the \emph{path-dependent} case, without appealing to probabilistic assumptions or to the Markov property. Finally, we present sufficient conditions for the pricing functional to have the regularity required for these results on the space of continuous paths.
These results make it possible to analyze the robustness of dynamic hedging strategies. We provide a sufficient condition on the payoff functional which ensures the positivity of the second vertical derivative of the pricing functional, i.e. the convexity of a certain real function. We also analyze the contribution of jumps of the price path to the hedging error incurred by trading in the market according to the delta-hedging strategy. We remark that discontinuities deteriorate the hedging performance. In the special case of a generalized Black-Scholes model used by the agent, if the derivative sold has a payoff monitored in discrete time, then the pricing functional is locally regular on the whole space of stopped continuous paths and its vertical and horizontal derivatives are given in explicit form. We also consider the case of a model with volatility depending on the price path, the Hobson-Rogers model, and we show how the pricing problem can again be reduced to the universal equation introduced in Chapter 2. Finally, we show some applications of our analysis, namely the hedging of Asian and barrier options.
The last chapter, independent of the rest of the thesis, is a joint work with Andrea Pascucci and Stefano Pagliarani, where we propose a new method for analytical approximation in local volatility models with L\'evy-type jumps. The main result is an asymptotic expansion of the characteristic function in a local L\'evy model, obtained in the Fourier space by considering the adjoint formulation of the pricing problem. Combined with standard Fourier methods, our result provides accurate price approximations. In the case of Gaussian jumps, we also derive an explicit approximation of the transition density of the underlying process by a heat kernel expansion; this approximation is obtained in two ways: using PIDE techniques and working in the Fourier space. Numerical tests confirm the effectiveness of the method.
\chapter*{Acknowledgments} \markboth{Acknowledgments}{}
First, I would like to thank my advisor Professor Rama Cont for giving me the valuable opportunity to be under his guidance and to join the team at the \emph{Laboratoire de Probabilit\'e et mod\`eles al\'eatoires} in Paris, for sharing with me his precious insights, and for encouraging my continuation in the academic research. He also taught me to have a more independent attitude to research. I would also like to thank my co-supervisor Professor Sara Biagini for her patient and precious support at the beginning of my research project in Pisa.
I really thank Professors Stefano Marmi, Fabrizio Lillo and Mariano Giaquinta for awarding me the PhD fellowship at the \emph{Scuola Normale Superiore} in Pisa, and Stefano for his helpfulness as my tutor and for the financial support. I am very thankful to Professors Franco Flandoli and Maurizio Pratelli who welcomed me on my arrival in Pisa, let me join their seminars at the department of Mathematics, and were always available for helpful discussions and advice.
I am very thankful to my master thesis advisor, Professor Andrea Pascucci, for guiding me into research, for his sincere advice and his constant availability. I thank as well Professor Pierluigi Contucci for his helpful encouragement and his collaboration, and Professors Paolo Foschi and Hykel Hosni for inviting me to present my ongoing research at the workshops organized respectively at \emph{Imperial College London} and at \emph{Centro di Ricerca Matematica Ennio De Giorgi} in Pisa.
I would like to thank all my colleagues and friends that shared these three years of PhD with me in Pisa, Paris and Cambridge, especially Adam, Giacomo, Mario, Dario, Giovanni, Laura A., Alessandro, Olga, Fernando, Pierre-Louis, Eric A., Nina, Alice, Tibault, Nils, Hamed. I especially thank Professor Rama Cont's other PhD students, for their friendship and reciprocal support: Yi, Eric S., Laura S. and Anna.
I thank all the administrative teams at \emph{Scuola Normale Superiore} in Pisa, at \emph{Laboratoire de Probabilit\'e et mod\`eles al\'eatoires} in Paris and at the \emph{Newton Institute for Mathematical Sciences} in Cambridge (UK), for being always very helpful. I also thank the French Embassy in Rome for financing my staying in Paris and the \emph{Isaac Newton Institute} for financing my staying in Cambridge.
I am grateful to Professors Ashkan Nikeghbali and Jean-Charles Rochet for inviting me at the University of Zurich, for their interest and trust in my research, and for doing me the honour of welcoming me in their team. I also thank Delia for her friendship and support since my arrival in Zurich.
I am very grateful to Professors Hans F\"ollmer and St\'ephane Cr\'epey for agreeing to act as referees for this thesis, for their helpful comments, and for the very kind and supportive reports. I would like to thank as well Professors Luigi Ambrosio and Peter Tankov for agreeing to be examiners for my defense.
Last but not least, I am infinitely thankful to my parents who have always supported me, my brother Kelvin for making me stronger, and my significant other Thomas for his love and for always encouraging me to follow the right path in my studies even when this implied a long distance between us.
\markboth{\MakeUppercase{Contents}}{} \tableofcontents
\chapter*{Notation} \addcontentsline{toc}{chapter}{Notation} \markboth{Notation}{}
\subsubsection{Acronyms and abbreviations} \begin{description}
\item[\cadlag] = right continuous with left limits
\item[\caglad] = left continuous with right limits
\item[SDE] = stochastic differential equation
\item[BSDE] = backward stochastic differential equation
\item[PDE] = partial differential equation
\item[FPDE] = functional partial differential equation
\item[PPDE] = path-dependent partial differential equation
\item[EMM] = equivalent martingale measure
\item[NA] = no-arbitrage condition
\item[NA1] = ``no arbitrage of the first kind'' condition
\item[NFL] = ``no free lunch'' condition
\item[NFLVR] = ``no free lunch with vanishing risk'' condition
\item[s.t.] = such that
\item[a.s.] = almost surely
\item[a.e.] = almost everywhere
\item[e.g.] = exempli gratia $\equiv$ for example
\item[i.e.] = id est $\equiv$ that is \end{description}
\subsubsection{Basic mathematical notation} \begin{description}
\item[$\R^d_+$] = positive orthant in $\R^d$
\item[\mbox{$\DT$ (resp. $D([0,T],\R^d_+)$)}] = space of \cadlag\ functions from $[0,T]$ to $\R^d$ (respectively $\R^d_+$), $d\in\NN$
\item[\mbox{$C([0,T],\R^d)$ (resp. $C([0,T],\R^d_+)$)}] = space of continuous functions from $[0,T]$ to $\R^d$ (respectively $\R^d_+$), $d\in\NN$
\item[$\S^d_+$] = set of symmetric positive-definite $d\times d$ matrices
\item[$\FF=\Ft$] = natural filtration generated by the coordinate process
\item[$\FF^X=\Ft^X$] = natural filtration generated by a stochastic process $X$
\item[$\EE^\PP$] = expectation under the probability measure $\PP$
\item[$\xrightarrow{\ \PP\ }$] = limit in probability $\PP$
\item[$\xrightarrow{\ ucp(\PP)\ }$] = limit in the topology defined by uniform convergence on compacts in probability $\PP$
\item[$\cdot$] = scalar product in $\R^d$ (unless differently specified)
\item[$\pqv{\cdot}$] = Frobenius inner product in $\R^{d\times d}$ (unless differently specified)
\item[$\norm{\cdot}_\infty$] = sup norm in spaces of paths, e.g. in $D([0,T],\R^d)$, $C([0,T],\R^d)$, $D([0,T],\R^d_+)$, $C([0,T],\R^d_+)$,\ldots
\item[$\norm{\cdot}_p$] = $L^p$-norm, $1\leq p\leq\infty$
\item[{$[\cdot]$} {($[\cdot,\cdot]$)}] = quadratic (co-)variation process
\item[$\bullet$] = stochastic integral operator
\item[$\tr$] = trace operator, i.e. $\tr(A)=\sum_{i=1}^dA_{i,i}$ where $A\in\R^{d\times d}$.
\item[${}^t\!A$] = transpose of a matrix $A$
\item[$x(t-)$] = left limit of $x$ at $t$, i.e. $\lim_{s\nearrow t}x(s)$
\item[$x(t+)$] = right limit of $x$ at $t$, i.e. $\lim_{s\searrow t}x(s)$
\item[$\De x(t)\equiv \De^-x(t)$] = left-side jump of $x$ at $t$, i.e. $x(t)-x(t-)$
\item[$\De^+x(t)$] = right-side jump of $x$ at $t$, i.e. $x(t+)-x(t)$
\item[$\partial_x$] = $\pa{x}$
\item[$\partial_{xy}$] = $\frac{\partial^2}{\partial x \partial y}$ \end{description}
\subsubsection{Functional notation} \begin{description} \item[$x(t)$] = value of $x$ at time $t$, e.g. $x(t)\in\R^d$ if $x\in\DT$; \item[$x_t$] = $x(t\wedge\cdot)\in\DT$ the path of $x$ \lq stopped\rq\ at the time $t$; \item[$x_{t-}$] = $x\ind_{[0,t)}+x(t-)\ind_{[t,T]}\in\DT$; \item[$x_t^\d$] = $x_t+\d\ind_{[t,T]}\in\DT$ the \textit{vertical perturbation} -- of size and direction given by the vector $\d\in\R^d$ -- of the path of $x$ stopped at $t$ over the future time interval $[t,T]$; \item[$\L_T$] = space of (\cadlag) stopped paths \item[$\W_T$] = subspace of $\L_T$ of continuous stopped paths \item[$\dinf$] = distance introduced on the space of stopped paths
\item[$\hd F$] = horizontal derivative of a \naf\ $F$
\item[$\vd F$] = vertical derivative of a \naf\ $F$
\item[$\nabla_X$] = vertical derivative operator defined on the space of square-integrable $\F^X$-martingales \end{description}
\mainmatter
\chapter*{Introduction} \addcontentsline{toc}{chapter}{Introduction} \markboth{\MakeUppercase{Introduction}}{}
\label{chap:intro}
The mathematical modeling of financial markets dates back to 1900, with the doctoral thesis~\citep{bachelier} of Louis Bachelier, who first introduced Brownian motion as a model for the price fluctuations of a liquid traded financial asset. After a long break, in the mid-sixties, \citet{sam} revived Bachelier's intuition by proposing the use of geometric Brownian motion which, like stock prices, remains positive. This soon became a reference financial model, thanks to \citet{bs} and \citet{merton}, who derived closed formulas for the price of call options in this setting, later named the ``Black-Scholes model'', and introduced the novelty of linking the option pricing problem with hedging. The seminal paper by \citet{harr-pliska} linked the theory of continuous-time trading to the theory of stochastic integrals, which has been used ever since as the standard setting in Mathematical Finance.
Since then, advanced stochastic tools have been used to describe the price dynamics of financial assets and its interplay with the pricing and hedging of financial derivatives contingent on the trajectory of the same assets. The common framework has been to model the financial market as a filtered probability space \ps\ under which the prices of liquid traded assets are represented by stochastic processes $X=(X_t)_{t\geq0}$ and the payoffs of derivatives as functionals of the underlying price process. The probability measure $\PP$, also called \textit{real world, historical, physical} or \textit{objective} probability tries to capture the observed patterns and, in the equilibrium interpretation, represents the (subjective) expectation of the ``representative investor''. The objective probability must satisfy certain constraints of market efficiency, the strongest form of which requires $X$ to be a $\Ft$-martingale under $\PP$. However, usually, weaker forms of market efficiency are assumed by no-arbitrage considerations, which translate, by the several versions of the Fundamental Theorem of Asset Pricing (see \cite{schachermayer,schachermayerEMM} and references therein), to the existence of an equivalent \textit{martingale} (or \textit{risk-neutral}) \textit{measure} $\QQ$, that can be interpreted as the expectation of a ``risk-neutral investor'' as well as a consistent price system describing the market consensus. The first result in this stream of literature (concerning continuous-time financial models) is found in \citet{ross78} in 1978, where the \emph{no-arbitrage} condition (NA) is formalized, then major advances came in 1979 by \citet{harr-kreps} and in 1981 by \citet{harr-pliska} and in particular by \citet{kreps81}, who introduced the \emph{no free lunch} condition (NFL), proven to be equivalent to the existence of a local martingale measure. More general versions of the Fundamental Theorem of Asset Pricing are due to \citet{ds94,ds98}, whose most general statement pertains to a general multi-dimensional semimartingale model and establishes the equivalence between the condition of \emph{no free lunch with vanishing risk} (NFLVR) and the existence of a sigma-martingale measure. The model assumption that the price process behaves as a semimartingale comes from the theory of stochastic analysis, since it is known that there is a good integration theory for a stochastic process $X$ if and only if it is a semimartingale. At the same time, such assumption is also in agreement with the financial reasoning, as it is shown in~\cite{ds94} that a very weak form of no free lunch condition, assuring also the existence of an equivalent local martingale measure, is enough to imply that if $X$ is locally bounded then it must be a semimartingale under the objective measure $\PP$. In \cite{ds-book} the authors present in a ``guided tour'' all important results pertaining to this theme.
The choice of an objective probability measure is not obvious and always carries a certain amount of model risk and model ambiguity. Recently, there has been a growing emphasis on the dangerous consequences of relying on a single specific probabilistic model. The concept of the so-called \textit{Knightian uncertainty}, introduced back in 1921 by Frank Knight~\citep{knight} while distinguishing between ``risk'' and ``uncertainty'', is still as relevant today as it was then, and has led to a new and challenging research area in Mathematical Finance. More fundamentally, the very existence of a single objective probability measure is questionable, in line with the criticism raised by \citet{definetti31,definetti37}.
After the boom experienced in the seventies and eighties, continuous-time modeling of financial markets started, in the late eighties, to attract new interpretations that could more faithfully represent economic reality.
In the growing flow of literature addressing the issue of model ambiguity, we may recognize two approaches: \begin{itemize} \item \textbf{model-independent}, where the single probability measure $\PP$ is replaced by a family $\P$ of plausible probability measures; \item \textbf{model-free}, that eliminates probabilistic a priori assumptions altogether, and relies instead on pathwise statements. \end{itemize}
The first versions of the Fundamental Theorem of Asset Pricing under model ambiguity are presented in \cite{bouchard-nutz,bfm,abps} in discrete time, and \cite{sara-bkn} in continuous time, using a model-independent approach.
The model-free approach to effectively deal with the issue of model ambiguity also provides a solution to another problem affecting the classical probabilistic modeling of financial markets. Indeed, in continuous-time financial models, the gain process of a self-financing trading strategy is represented as a stochastic integral.
However, despite the elegance of the probabilistic representation, some real concerns arise. Besides the issue of the impossible consensus on a probability measure, the representation of the gain from trading lacks a pathwise meaning: while it is a limit in probability of approximating Riemann sums, the stochastic integral does not have a well-defined value on a given \lq state of the world\rq. This causes a gap in the use of probabilistic models, in the sense that it is not possible to compute the gain of a trading portfolio given the realized trajectory of the underlying asset price, which constitutes a drawback in terms of interpretation.
Beginning in the nineties, a new branch of the literature has addressed the issue of pathwise integration in the context of financial mathematics.
The approach of this thesis is probability-free. In the first part, we set up a framework for continuous-time trading where everything has a pathwise characterization. This purely analytical structure allows us to effectively deal with the issue of model ambiguity (or Knightian uncertainty) and the lack of a path-by-path computation of the gain of trading strategies.
A breakthrough in this direction was the seminal paper written by \citet{follmer} in 1981. He proved a pathwise version of the \ito\ formula, constructing the integral of a $C^1$-class function of a \cadlag\ path with respect to the path itself as a limit of non-anticipative Riemann sums. His purely analytical approach does not require any probabilistic structure, which may instead come into play only at a later point by considering stochastic processes that satisfy almost surely, i.e. for almost all paths, the analytical requirements. In this case, the so-called \emph{F\"ollmer integral} provides a path-by-path construction of the stochastic integral. F\"ollmer's framework turns out to be of main interest in finance (see also \cite{schied-CPPI}, \cite[Sections 4,5]{follmer-schied}, and \cite[Chapter 2]{sondermann}) as it allows one to avoid any probabilistic assumption on the dynamics of traded assets and consequently any model risk/ambiguity. Reasonably, only observed price trajectories are involved.
In 1994, \citet{bickwill} provided an interesting economic interpretation of F\"ollmer's pathwise calculus, leading to new perspectives in the mathematical modeling of financial markets. Bick and Willinger reduced the computation of the initial cost of a replicating trading strategy to an exercise of analysis. Moreover, for a given price trajectory (state of the world), they showed one is able to compute the outcome of a given trading strategy, that is the gain from trade. Other contributions towards the pathwise characterization of stochastic integrals have been obtained via probabilistic techniques by Wong and Zakai (1965), \citet{bichteler}, \citet{karandikar} and \citet{nutz-int} (only existence), and via convergence of discrete-time economies by \citet{willtaq}.
We are interested only in the model-free approach: we set our framework in a similar way to \cite{bickwill}, and we enhance it with the aid of the pathwise calculus for \naf s, developed by \citet{contf2010}. This theory extends F\"ollmer's pathwise calculus to a large class of non-anticipative functionals.
Another problem related to the model uncertainty, addressed in the second part of this thesis is the \emph{robustness} of hedging strategies used by market agents to cover the risks involved in the sale of financial derivatives. The issue of robustness came to light in the nineties, dealing mostly with the analysis of the performance, in a given complete model, of pricing and hedging simple payoffs under a mis-specification of the volatility process. The problem under consideration is the following. Let us imagine a market participant who sells an (exotic) option with payoff $H$ and maturity $T$ on some underlying asset which is assumed to follow some model (say, Black-Scholes), at price given by
$$ V(t) = E^{\mathbb{Q}}[H|{\cal F}_t]$$ and hedges the resulting profit and loss using the hedging strategy derived from the same model (say, the Black-Scholes delta hedge for $H$). However, the {\it true} dynamics of the underlying asset may, of course, differ from the assumed dynamics. Therefore, the hedger is interested in a few questions: How good is the result of the hedging strategy? How \lq robust\rq\ is it to model mis-specification? How does the hedging error relate to model parameters and option characteristics? In 1998, \citet{elkaroui} provided an answer to these questions in the setting of diffusion models, for non-path-dependent options. They provided an explicit formula for the profit and loss, or \textit{tracking error} as they call it, of the hedging strategy. Specifically, they show that if the underlying asset follows a Markovian diffusion $$\mathrm{d} S(t)= r(t)S(t)\mathrm{d} t+ S(t)\sigma(t) \mathrm{d} W(t) \qquad \text{under}\ \mathbb{P}$$ such that the discounted price $S/M$ is a square-integrable martingale, then a hedging strategy computed in a (mis-specified) model with local volatility $\sigma_0$, satisfying some technical conditions, leads to a tracking error equal to $$\int_0^T \frac{\sigma_0^{2}(t,S(t))-\sigma^{2}(t)}{2}S(t)^2e^{\int_t^T r(s)\mathrm{d} s}\overbrace{\partial_{xx}^2f(t,S(t))}^{\Gamma(t)}\mathrm{d} t,$$ $\PP$-almost surely. This fundamental equation, called by \citet{davis} \lq the most important equation in option pricing theory\rq, shows that the exposure of a mis-specified delta hedge over a short time period is proportional to the Gamma of the option times the specification error measured in quadratic variation terms.
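To make the formula concrete, the following Python sketch (illustrative only: the script, its parameter values and the choice $r=0$ are ours, not part of \cite{elkaroui}) simulates a Black-Scholes path with true volatility $\sigma$, delta-hedges a call computed under a mis-specified volatility $\sigma_0$, and compares the realized profit and loss with the Gamma-weighted integral above.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

S0, K, T = 100.0, 100.0, 1.0
sigma, sigma0 = 0.2, 0.3          # true vs. mis-specified volatility
n = 100_000                       # hedging dates
rng = np.random.default_rng(0)

dt = T / n
t = np.linspace(0.0, T, n + 1)
dW = rng.standard_normal(n) * np.sqrt(dt)
S = S0 * np.concatenate(([1.0],
        np.exp(np.cumsum(sigma * dW - 0.5 * sigma**2 * dt))))

tau = T - t[:-1]                  # time to maturity at each hedging date
d1 = (np.log(S[:-1] / K) + 0.5 * sigma0**2 * tau) / (sigma0 * np.sqrt(tau))
delta = norm.cdf(d1)              # model delta (r = 0)
gamma = norm.pdf(d1) / (S[:-1] * sigma0 * np.sqrt(tau))

price0 = S0 * norm.cdf(d1[0]) - K * norm.cdf(d1[0] - sigma0 * np.sqrt(T))
pnl = price0 + np.sum(delta * np.diff(S)) - max(S[-1] - K, 0.0)
tracking = np.sum(0.5 * (sigma0**2 - sigma**2) * S[:-1]**2 * gamma * dt)
print(pnl, tracking)              # close, and positive when sigma0 > sigma
\end{verbatim}
With $\sigma_0>\sigma$ the realized profit and loss is positive, in agreement with the sign of the Gamma-weighted volatility spread.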
Two other papers, studying respectively the monotonicity and super-replication properties of non-path-dependent option prices under mis-specified models, are \cite{bergman} and \cite{hobson}, by PDE and coupling techniques respectively. The robustness of dynamic hedging strategies in the context of model ambiguity has been considered by several authors in the literature (\citet{bickwill,avlevyparas,lyons,cont2006}). \citet{ss} studied the robustness of delta hedging strategies for discretely monitored path-dependent derivatives in a Markovian diffusion (\lq local volatility\rq) model from a pathwise perspective: they looked at the performance of the delta hedging strategy derived from some model when applied to the realized underlying price path, rather than to some supposedly true stochastic dynamics. In the present thesis, we investigate the robustness of delta hedging from this pathwise perspective, but we consider a general square-integrable exponential model used by the hedger for continuously -- instead of discretely -- monitored path-dependent derivatives. In order to conduct this pathwise analysis, we resort to the pathwise functional calculus developed in \citet{contf2010} and the functional \ito\ calculus developed in \cite{contf2013,cont-notes}. In particular we use the results of \chap{path-trading} of this thesis, which provide an analytical framework for the analysis of self-financing trading strategies in a continuous-time financial market.
The last chapter of this thesis deals with a completely different problem, that is the search for accurate approximation formulas for the price of financial derivatives under a model with local volatility and L\'evy-type jumps. Precisely, we consider a one-dimensional {\it local L\'evy model}: the risk-neutral dynamics of the underlying log-asset process $X$ is given by $$\mathrm{d} X(t)=\m(t,X(t-))\mathrm{d} t+\s(t,X(t)) \mathrm{d} W(t)+ \mathrm{d} J(t),$$ where $W$ is a standard real Brownian motion on a filtered probability space $(\O,\F,(\F_t)_{0\leq t\leq T},\mathbb{P})$ with the usual assumptions on the filtration and $J$ is a pure-jump L\'evy process, independent of $W$, with L\'evy triplet $(\m_{1},0,\n)$. Our main result is a fourth order approximation formula of the characteristic function $\phi_{X^{t,x}(T)}$ of the log-asset price $X^{t,x}(T)$ starting from $x$ at time $t$, that is
$$\phi_{X^{t,x}(T)}(\x)=E^\PP\left[e^{i\x X^{t,x}(T)}\right],\qquad \x\in\R.$$ In some particular cases, we also obtain an explicit approximation of the transition density of $X$.
Local L\'evy models of this form have attracted an increasing interest in the theory of volatility modeling (see, for instance, \cite{AndersenAndreasen2000}, \cite{CGMY2004} and \cite{ContLantosPironneau2011}); however, closed pricing formulae are available to date only in a few cases. Our approximation formulas provide a way to compute option prices and sensitivities efficiently and accurately by using standard and well-known Fourier methods (see, for instance, Heston \cite{Heston1993}, Carr and Madan \cite{CarrMadan1999}, Raible \cite{Raible2000} and Lipton \cite{Lipton2002}).
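As a minimal illustration of the Fourier pricing step (a sketch under simplifying assumptions, not the adjoint expansion method of this chapter), the following Python snippet prices a European call in the Merton jump-diffusion model, whose characteristic function is explicit, via a one-dimensional Fourier inversion of the type usually attributed to Lewis and Lipton; we take $r=0$, and all parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

S0, K, T = 100.0, 100.0, 1.0
sig, lam, m, dlt = 0.2, 0.3, -0.1, 0.15   # diffusion vol, jump intensity/law

def cf(u):
    # characteristic function of log(S_T/S0); the drift makes exp(X) a
    # martingale (risk-neutral measure, r = 0)
    drift = -0.5 * sig**2 - lam * (np.exp(m + 0.5 * dlt**2) - 1.0)
    return np.exp(T * (1j * u * drift - 0.5 * sig**2 * u**2
                       + lam * (np.exp(1j * u * m - 0.5 * dlt**2 * u**2) - 1.0)))

k = np.log(S0 / K)
integrand = lambda u: (np.exp(1j * u * k) * cf(u - 0.5j)).real / (u**2 + 0.25)
call = S0 - np.sqrt(S0 * K) / np.pi * quad(integrand, 0.0, 100.0)[0]
print(call)
\end{verbatim}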
We derive the approximation formulas by introducing an ``adjoint'' expansion method: this is worked out in the Fourier space by considering the adjoint formulation of the pricing problem. Generally speaking, our approach makes use of Fourier analysis and PDE techniques.
The thesis is structured as follows:
\paragraph{Chapter 1} The first chapter introduces the pathwise functional calculus, as developed by Cont and Fourni\'e \cite{contf2010,cont-notes}, and states some of its key results.
The most important theorem is a change-of-variable formula, extending the pathwise \ito\ formula proven in \cite{follmer} to \naf s, which applies to a class of paths with finite quadratic variation. The chapter then includes a discussion of the different notions of \emph{quadratic variation} given by different authors in the literature.
\paragraph{Chapter 2} The second chapter presents the probabilistic counterpart of the pathwise functional calculus, the so-called \lq functional \ito\ calculus\rq, following the ground-breaking work of Cont and Fourni\'e \cite{ContFournie09a,contf2013,cont-notes}. Moreover, the weak functional calculus, which applies to a large class of square-integrable processes, is introduced. Then, in \Sec{kolmogorov} we show how to apply the functional \ito\ calculus to extend the relation between Markov processes and partial differential equations to the path-dependent setting. These tools have useful applications for the pricing and hedging of path-dependent derivatives. In this respect, we state the universal pricing and hedging formulas. Finally, in \Sec{PPDE}, we report the results linking forward-backward stochastic differential equations to path-dependent partial differential equations and we recall some of the recent papers investigating weak and viscosity solutions of such path-dependent PDEs.
\paragraph{Chapter 3} \Sec{lit-path} presents a synopsis of the various approaches in the literature attempting a pathwise construction of stochastic integrals, and clarifies the connection with appropriate no-arbitrage conditions. In \Sec{setting}, we set our analytical framework
and we start by defining \emph{simple trading strategies}, whose trading times are covered by the elements of a given sequence of partitions of the time horizon $[0,T]$ and for which the self-financing condition is straightforward. We also remark on the difference between our setting and the ones presented in \Sec{arbitrage} about no-arbitrage, and we provide a justification, in terms of a condition on the set of admissible price paths, for the assumptions underlying our main results. In \Sec{self-fin}, we define equivalent self-financing conditions for (non-simple) trading strategies on a set of paths, whose gain from trading is the limit of gains of simple strategies and satisfies the pathwise counterpart of the classical self-financing condition. Similar conditions were assumed in \cite{bickwill} for convergence of general trading strategies. In \Sec{gain}, we show the first of the main results of the chapter: in \prop{G} for the continuous case and in \prop{G-cadlag} for the \cadlag\ case, we obtain the path-by-path computability of the gain of path-dependent trading strategies in a certain class of $\R^d$-valued \caglad\ adapted processes, which are also self-financing on the set of paths with finite quadratic variation along $\Pi$. For dynamic asset positions $\phi$ in the vector space of \emph{vertical 1-forms},
the gain of the corresponding self-financing trading strategy is well-defined as a \cadlag\ process $G(\cdot,\cdot;\phi)$ such that
\begin{align*}
G(t,\w;\phi)=&\int_0^t \phi(u,\w_u)\cdot\mathrm{d}^{\Pi}\w \\
=&\lim_{n\rightarrow\infty}\sum_{t^n_i\in\pi^n, t^n_i\leq t}\phi(t_i^n,\w^{n}_{t^n_i})\cdot(\w(t_{i+1}^n)-\w(t_i^n))
\end{align*} for all continuous paths of finite quadratic variation along $\Pi$, where $\w^n$ is a piecewise constant approximation of $\w$ defined in \eq{wn}. In \Sec{replication}, we present a pathwise replication result, \prop{hedge}, that can be seen as the model-free and path-dependent counterpart of the well known pricing PDE in mathematical finance, giving furthermore an explicit formula for the \emph{hedging error}. That is, if a \lq smooth\rq\ \naf\ $F$ solves $$\left\{\bea{ll} \hd F(t,\w_t)+\frac12\tr\lf A(t)\cdot\vd^2F(t,\w_t)\rg=0,\quad t\in[0,T), \w\in Q_A(\Pi) \\ F(T,\w)=H(\w), \end{array}\right.$$ where $H$ is a continuous (in sup norm) payoff functional and $Q_A(\Pi)$ is the set of paths with absolutely continuous quadratic variation along $\Pi$ with density $A$, then the hedging error of the delta-hedging strategy for $H$ with initial investment $F(0,\cdot)$ and asset position $\vd F$ is \begin{equation}\label{eq:er} \frac12\int_{0}^T\tr\lf\vd^2F(t,\w)\cdot \lf A(t)-\tilde A(t)\rg\rg \mathrm{d} t \end{equation} on all paths $\w\in Q_{\tilde A}(\Pi)$. In particular, if the underlying price path $\w$ lies in $Q_A(\Pi)$, the delta-hedging strategy $(F(0,\cdot),\vd F)$ replicates the $T$-claim with payoff $H$ and its portfolio's value at any time $t\in[0,T]$ is given by $F(t,\w_t)$. The explicit error formula \eq{er} is the purely analytical counterpart of the probabilistic formula given in \cite{elkaroui}, where a mis-specification of volatility is considered in a stochastic framework. Finally, in \Sec{isometry} we propose, in \prop{Iw}, the construction of a family of pathwise integral operators (indexed by the paths) as extended isometries between normed spaces defined as quotient spaces.
\paragraph{Chapter 4} The last chapter begins with a review of the results in the literature that focus on the problem of robustness we are interested in, in particular the propagation of convexity and the hedging error formula for non-path-dependent derivatives, as well as a contribution to the pathwise analysis of path-dependent hedging for discretely-monitored derivatives. In \Sec{path-robust}, we introduce the notion of robustness that we are investigating (see \defin{rob}): the delta-hedging strategy is robust on a certain set $U$ of price paths if it super-replicates the claim at maturity, when trading with the market prices, as long as the price trajectory belongs to $U$. We then state in \prop{robust} a first result which applies to the case where the derivative being sold admits a smooth pricing functional under the model used by the hedger: robustness holds if the second vertical derivative of the value functional, $\vd^2F$, is (almost everywhere) of the same sign as the difference between the model volatility and the realized market volatility. Moreover, we give the explicit representation of the \emph{hedging error} at maturity, that is $$\frac12\int_{0}^T \lf\s(t,\w)^2-\s^{\mathrm{mkt}}(t,\w)^2\rg\w^2(t)\vd^2F(t,\w) \mathrm{d} t,$$ where $\s$ is the model volatility and $\s^{\mathrm{mkt}}$ is the realized market volatility, defined by $t\mapsto\s^{\mathrm{mkt}}(t,\w)=\frac1{\w(t)}\sqrt{\frac{\mathrm{d}}{\mathrm{d} t}[\w](t)}$. In \Sec{exist}, \prop{exist} provides a constructive existence result for a pricing functional which is twice left-continuously vertically differentiable on continuous paths, given a log-price payoff functional $h$ which is \emph{vertically smooth} on the space of continuous paths (see \defin{vsmooth}). We then show in \Sec{convex}, namely in \prop{convex}, that a sufficient condition for the second vertical derivative of the pricing functional to be positive is the convexity of the real map $$v^H(\cdot;t,\w):\R\rightarrow\R,\quad e\mapsto v^H(e;t,\w)=H\lf\w(1+e\ind_{[t,T]})\rg$$ in a neighborhood of 0. This condition may be readily checked for all path-dependent payoffs. In \Sec{jumps}, we analyze the contribution of the jumps of the price trajectory to the hedging error obtained by trading on the market according to a delta-hedging strategy. We show in \prop{jumps} that the term carried by the jumps is of negative sign if the second vertical derivative of the value functional is positive. In \Sec{HR}, we consider a specific pricing model with path-dependent volatility, the Hobson-Rogers model. Finally, in \Sec{ex}, we apply the results of the previous sections to common examples, specifically the hedging of discretely monitored path-dependent derivatives, Asian options and barrier options. In the first case, we show in \lem{BS} that in the Black-Scholes model the pricing functional is of class $\Cloc$ and its vertical and horizontal derivatives are given in closed form. Regarding Asian options, both the Black-Scholes and the Hobson-Rogers pricing functionals have already been proved to be regular by means of classical results, and, assuming that the market price path lies in the set of paths with absolutely continuous \fqv{\mbox{the}} given sequence of partitions and that the model volatility overestimates the realized market volatility, the delta hedge is \emph{robust}. Regarding barrier options, robustness fails to be satisfied: Black-Scholes delta-hedging strategies for barrier options are not robust to volatility mis-specifications.
\paragraph{Chapter 5}
Chapter 5, independent from the rest of the thesis, is based on joint work with Andrea Pascucci and Stefano Pagliarani.
In \Sec{sec1}, we present the general procedure that allows us to approximate analytically the transition density (or the characteristic function) in terms of the solutions of a sequence of nested Cauchy problems. We then also prove explicit error bounds for the expansion, which generalize some classical estimates. In \Sec{Merton} and \Sec{LV-J}, the previous Cauchy problems are solved explicitly by using different approaches. Precisely, in \Sec{Merton} we focus on the special class of local L\'evy models with Gaussian jumps and we provide a heat kernel expansion of the transition density of the underlying process. The same results are derived in an alternative way in Subsection \ref{sec:secsimpl}, by working in the Fourier space.
\Sec{LV-J} contains the main contribution of the chapter: we consider the general class of local L\'evy models and provide high order approximations of the characteristic function. Since all the computations are carried out in the Fourier space, we are forced to introduce {\it a dual formulation} of the approximating problems, which involves the adjoint (forward) Kolmogorov operator. Even if at first sight the adjoint expansion method seems a bit odd, it turns out to be much more natural and simpler than the direct formulation. To the best of our knowledge, the interplay between perturbation methods and Fourier analysis has not been previously studied in finance. Actually our approach seems to be advantageous for several reasons: \begin{enumerate}[(i)]
\item working in the Fourier space is natural and allows to get simple and clear results;
\item we can treat the entire class of L\'evy processes and not only jump-diffusion processes or processes which can be approximated by heat kernel expansions -- potentially, we can take as leading term of the expansion any process which admits an explicit characteristic function, not necessarily a Gaussian kernel;
\item our method can be easily adapted to the case of stochastic volatility or multi-asset models;
\item higher order approximations are rather easy to derive and the approximation results are generally very accurate. Potentially, it is possible to derive approximation formulae for the characteristic function and plain vanilla options at any prescribed order. For example, in Subsection \ref{HOA} we also provide the $3^{\text{rd}}$ and $4^{\text{th}}$ order expansions of the characteristic function, used in the numerical tests of \Sec{numeric}. A Mathematica notebook with the implemented formulae is freely available on \url{https://explicitsolutions.wordpress.com}. \end{enumerate}
Finally, in \Sec{numeric}, we present some numerical tests under the Merton and Variance-Gamma models and show the effectiveness of the analytical approximations compared with Monte Carlo simulation.
\chapter{Pathwise calculus for \naf s} \label{chap:pfc} \chaptermark{Pathwise functional calculus}
This chapter is devoted to the presentation of the pathwise calculus for non-anticipative functionals developed by \citet{contf2010}, whose main result is a change of variable formula (also called \emph{chain rule}) for \naf s. This pathwise functional calculus extends the pathwise calculus introduced by F\"ollmer in his seminal paper \emph{Calcul d'\ito\ sans probabilit\'es} in 1981. Its probabilistic counterpart, called the \lq functional \ito\ calculus\rq\ and presented in \chap{fic}, can either stand by itself or rest entirely on the pathwise results, e.g. by introducing a probability measure under which the integrator process is a semimartingale. This clearly shows the pathwise nature of the theory, just as \citeauthor{follmer} showed that the classical \ito\ formula has a pathwise meaning. Other chain rules were derived in \cite{norvaisa} for extended Riemann-Stieltjes integrals and for a type of one-sided integral similar to F\"ollmer's one.
Before presenting the functional case we are concerned with, let us set the stage by introducing the pathwise calculus for ordinary functions. First, let us give the definition of quadratic variation for a function that we are going to use throughout this thesis and review other notions of quadratic variation.
\section{Quadratic variation along a sequence of partitions} \label{sec:qv} \sectionmark{Quadratic variation of paths}
Let $\Pi=\{\pi_n\}_{n\geq1}$ be a sequence of partitions of $[0,T]$, that is, for all $n\geq1$, $\pi_n=(t_i^n)_{i=0,\ldots,m(n)},\;0=t_0^n<\ldots<t_{m(n)}^n=T$. We say that $\Pi$ is \emph{dense} if $\cup_{n\geq1}\pi_n$ is dense in $[0,T]$, which holds in particular if the mesh $\abs{\pi^n}:=\max_{i=1,\ldots m(n)}|t^n_i-t^n_{i-1}|$ goes to 0 as $n$ goes to infinity, and we say that $\Pi$ is \emph{nested} if $\pi_{n}\subset\pi_{n+1}$ for all $n\in\NN$; for nested sequences, the two conditions are equivalent. \begin{definition}\label{def:qv1}
Let $\Pi$ be a dense sequence of partitions of $[0,T]$, a \cadlag\ function $x:[0,T]\to\R$ is said to be of \emph{finite quadratic variation along $\Pi$} if there exists a non-negative \cadlag\ function $[x]_\Pi:[0,T]\to\R_+$ such that \begin{equation}\label{eq:qv}
\forall t\in[0,T],\quad[x]_\Pi(t)=\Limn\sum_{\stackrel{i=0,\ldots,m(n)-1:}{t^n_i\leq t}}(x(t^n_{i+1})-x(t^n_{i}))^2<\infty \end{equation} and \begin{equation} \label{eq:qv-jumps} [x]_\Pi(t)=[x]_\Pi^c(t)+\sum_{0<s\leq t}\De x^2(s) \zs, \end{equation} where $[x]_\Pi^c$ is a continuous non-decreasing function and $\De x(t):=x(t)-x(t-)$ as usual. In this case, the non-decreasing function $[x]_\Pi$ is called the \emph{quadratic variation of $x$ along $\Pi$}. \end{definition} Note that the quadratic variation $[x]_\Pi$ depends strongly on the sequence of partitions $\Pi$. Indeed, as remarked in \cite[Example 2.18]{cont-notes}, for any real-valued continuous function one can construct a sequence of partitions along which that function has null quadratic variation.
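For a concrete feel of \defin{qv1}, the following short Python script (an illustration of ours; grid size and seed are arbitrary) evaluates the sums in \eq{qv} at $t=T$ along the dyadic partitions $\pi_n=\{iT2^{-n}\}$ for a sampled Brownian path: the sums stabilize around the value $[x](T)=T$.
\begin{verbatim}
import numpy as np

T, N = 1.0, 2**20                 # path sampled on a fine uniform grid
rng = np.random.default_rng(1)
x = np.concatenate(([0.0],
        np.cumsum(rng.standard_normal(N) * np.sqrt(T / N))))

for n in [6, 10, 14, 18]:         # dyadic partitions pi_n
    step = N // 2**n
    print(n, np.sum(np.diff(x[::step]) ** 2))   # approaches [x](T) = T
\end{verbatim}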
In the multi-dimensional case, the definition is modified as follows. \begin{definition}\label{def:qvd} An $\R^d$-valued \cadlag\ function $x$ is of \emph{finite quadratic variation along $\Pi$} if, for all $1\leq i,j\leq d$, $x^i,x^i+x^j$ have finite quadratic variation along $\Pi$. In this case, the function $[x]_\Pi$ has values in the set $\S^+(d)$ of positive symmetric $d\times d$ matrices: $$\forall t\in[0,T],\quad[x]_\Pi(t)=\Limn\sum_{\stackrel{i=0,\ldots,m(n)-1:}{t^n_i\leq t}}\incrx{x}\cdot\,^t\!\!\incrx{x},$$ whose elements are given by \begin{eqnarray*} ([x]_\Pi)_{i,j}(t) &=& \frac12\lf[x^i+x^j]_\Pi(t)-[x^i]_\Pi(t)-[x^j]_\Pi(t)\rg \\
&=& [x^i,x^j]_\Pi^c(t)+\sum_{0<s\leq t}\De x^i(s)\De x^j(s) \end{eqnarray*} for $i,j=1,\ldots d$. \end{definition}
For any set $U$ of \cadlag\ paths with values in $\R$ (or $\R^d$), we denote by $Q(U,\Pi)$ the subset of $U$ of paths having \fqv{\Pi}.
Note that $Q(D([0,T],\R),\Pi)$ is not a vector space, because $x^1,x^2\in\penalty0 Q(D([0,T],\R),\Pi)$ does not imply $x^1+x^2\in Q(D([0,T],\R),\Pi)$ in general. This is the reason for the additional requirement $x^i+x^j\in Q(D([0,T],\R),\Pi)$ in \defin{qvd}. As remarked in \cite[Remark 2.20]{cont-notes}, the subset of paths $x$ which are $C^1$-functions of a same path $\w\in D([0,T],\R^d)$, i.e. $$\{x\in Q(D([0,T],\R),\Pi),\;\exists f\in C^1(\R^d,\R),\,x(t)=f(\w(t))\,\forall t\in[0,T]\},$$ is instead stable under addition, in the sense that the sum of two of its elements still has finite quadratic variation along $\Pi$.
Henceforth, when considering a function $x\in Q(U,\Pi)$, we will drop the subscript in the notation of its quadratic variation, thus denoting $[x]$ instead of $[x]_\Pi$.
\subsection{Relation with the other notions of quadratic variation}
An important distinction is the one between \defin{qv1} and the notions of $2$-variation and local 2-variation considered in the theory of extended Riemann-Stieltjes integrals (see e.g. \citet[Chapters 1,2]{dudley-norvaisa} and \citet[Section 1]{norvaisa}). Let $f$ be any real-valued function on $[0,T]$ and $0<p<\infty$, the \emph{$p$-variation} of $f$ is defined as \begin{equation}\label{eq:p-var} v_p(f):=\sup_{\k\in P[0,T]}s_p(f;\k) \end{equation} where $P[0,T]$ is the set of all partitions of $[0,T]$ and $$s_p(f;\k)=\sum_{i=1}^n\abs{f(t_i)-f(t_{i-1})}^p,\quad\text{for }\k=\{t_i\}_{i=0}^n\in P[0,T].$$ The set of functions with finite $p$-variation is denoted by $\W_p$. We also denote by $\mathrm{vi}(f)$ the variation index of $f$, that is the unique number in $[0,\infty]$ such that $$\bea{l}v_p(f)<\infty,\quad\mbox{for all }p>\mathrm{vi}(f),\\v_p(f)=\infty,\quad\mbox{for all }p<\mathrm{vi}(f) \end{array}.$$
For $1<p<\infty$, $f$ has the \emph{local $p$-variation} if the directed function $(s_p(f;\cdot),\mathfrak R)$ converges, where $\mathfrak R:=\{\mathcal R(\k),\,\k\in P[0,T]\}$ and $\mathcal R(\k):=\{\pi\in P[0,T],\,\k\subset\pi\}$. An equivalent characterization of functions with local $p$-variation was introduced by \citet{love-young} and it is given by the Wiener class $\W^*_p$ of functions $f\in\W_p$ such that $$\limsup_{\k,\mathfrak R}s_p(f;\k)=\sum_{(0,T]}\abs{\De^-f}^p+\sum_{[0,T)}\abs{\De^+f}^p,$$ where the two sums converge unconditionally. We refer to \cite[Appendix A]{norvaisa} for convergence of directed functions and unconditionally convergent sums. The Wiener class satisfies $\cup_{1\leq q<p}\W_q\subset\W_p^*\subset \W_p$.
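To give a feeling for these notions, the following sketch (ours; it only evaluates $s_p(f;\k)$ along dyadic partitions, hence it produces lower bounds for $v_p(f)$) computes the sums $s_p$ for a sampled Brownian path and several values of $p$: they blow up for $p<2$ and vanish for $p>2$, consistently with a variation index equal to 2.
\begin{verbatim}
import numpy as np

T, N = 1.0, 2**20
rng = np.random.default_rng(2)
x = np.concatenate(([0.0],
        np.cumsum(rng.standard_normal(N) * np.sqrt(T / N))))

for n in [8, 12, 16, 20]:
    incr = np.abs(np.diff(x[:: N // 2**n]))     # increments along pi_n
    print(n, [float(np.sum(incr**p)) for p in (1.5, 2.0, 2.5)])
\end{verbatim}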
A theory of Stieltjes integrability for functions of bounded $p$-variation was developed by \citet{young36,young38} in the thirties and generalized, among others, by \cite{dudley-norvaisa99,norvaisa02} around the year 2000. According to Young's best-known theorem on Stieltjes integrability, if \begin{equation}\label{eq:ys} f\in\W_p,\;g\in\W_q,\quad p^{-1}+q^{-1}>1,\,p,q>0, \end{equation} then the integral $\int_0^Tf\mathrm{d} g$ exists: in the \emph{Riemann-Stieltjes} sense if $f,g$ have no common discontinuities, in the \emph{refinement Riemann-Stieltjes} sense if $f,g$ have no common discontinuities on the same side, and always in the \emph{Central Young} sense. \cite{dudley-norvaisa99} showed that, under condition \eq{ys}, the \emph{refinement Young-Stieltjes} integral also always exists. However, in applications, we often deal with paths of unbounded 2-variation, like sample paths of Brownian motion. For example, given a Brownian motion $B$ on a complete probability space $(\O,\F,\PP)$, the pathwise integral $(RS)\!\int_0^Tf\mathrm{d} B(\cdot,\w)$ is defined in the Riemann-Stieltjes sense, for $\PP$-almost all $\w\in\O$, for any function having bounded $p$-variation for some $p<2$, which does not apply to sample paths of $B$. In particular, in Mathematical Finance, one necessarily deals with price paths having unbounded 2-variation. In the special case of a market with continuous price paths, as shown in \Sec{arbitrage}, \cite{vovk-proba} proved that non-constant price paths must have a variation index equal to 2 and infinite 2-variation in order to rule out \lq arbitrage opportunities of the first kind\rq. In the special case where the integrand $f$ is replaced by a smooth function of the integrator $g$, weaker conditions than \eq{ys} on the $p$-variation are sufficient (see \cite{norvaisa02} or the survey in \cite[Chapter 2.4]{norvaisa}) to obtain chain rules and integration-by-parts formulas for extended Riemann-Stieltjes integrals, like the refinement Young-Stieltjes integral, the symmetric Young-Stieltjes integral, the Central Young integral, the Left and Right Young integrals, and others. However, these conditions are still quite restrictive.
As a consequence, other notions of quadratic variation were formulated and integration theories for them followed.
\subsubsection{\follmer's quadratic variation and pathwise calculus}
In 1981, \citet{follmer} derived a pathwise version of the \ito\ formula, conceiving a path-by-path construction of the stochastic integral for a special class of integrands. His purely analytic approach does not require any probabilistic structure, which may instead come into play only at a later stage by considering stochastic processes that satisfy almost surely, i.e. for almost all paths, a certain condition. F\"ollmer considers functions on the half line $[0,\infty)$, but we present here his definitions and results adapted to the finite time horizon $[0,T]$. His notion of quadratic variation is given in terms of weak convergence of measures and is named here after him in order to distinguish between the different definitions. \begin{definition}\label{def:qv-follmer} Given a dense sequence $\Pi=\{\pi_n\}_{n\geq1}$ of partitions of $[0,T]$, for $n\geq1\; \pi_n=(t_i^n)_{i=0,\ldots,m(n)}$, $0=t_0^n<\ldots<t_{m(n)}^n<\infty$, a \cadlag\ function $x:[0,T]\to\R$ is said to have \emph{F\"ollmer's quadratic variation along} $\Pi$ if the Borel measures \begin{equation}\label{eq:xin} \xi_n:=\sum\limits_{i=0}^{m(n)-1}\incrx{x}^2\d_{t_i^n}, \end{equation} where $\d_{t_i^n}$ is the Dirac measure centered in $t_i^n$, converge weakly to a finite measure $\xi$ on $[0,T]$ with cumulative function $[x]$ and Lebesgue decomposition \begin{equation}\label{eq:dec-follmer} [x](t)=[x]^c(t)+\sum_{0<s\leq t}\De x^2(s),\quad \forall t\in[0,T] \end{equation} where $[x]^c$ is the continuous part. \end{definition}
\begin{proposition}[F\"ollmer's pathwise \ito\ formula]
Let $x:[0,T]\to\R$ be a \cadlag\ function having F\"ollmer's quadratic variation along $\Pi$.
Then, any function $f\in\C^2(\R)$ satisfies, for all $t\in[0,T]$, \begin{align}
\label{eq:follmer_ito}\nonumber f(x(t))={}& f(x(0))+\int_0^tf'(x(s-))\mathrm{d} x(s)+\frac12\int_{(0,t]}f''(x(s-))\mathrm{d}[x](s) \\ \nonumber &{}+\sum_{0<s\leq t}\lf f(x(s))-f(x(s-))-f'(x(s-))\De x(s)-\frac12f''(x(s-))\De x(s)^2 \rg \\\nonumber ={}& f(x(0))+\int_0^tf'(x(s-))\mathrm{d} x(s)+\frac12\int_{(0,t]}f''(x(s))\mathrm{d}[x]^c(s) \\ &{}+\sum_{0<s\leq t}\lf f(x(s))-f(x(s-))-f'(x(s-))\De x(s) \rg, \end{align} where the pathwise definition \begin{equation} \label{eq:follmer_int} \int_0^tf'(x(s-))\mathrm{d} x(s):=\Limn \sum_{t_i^n\leq t}f'(x(t_i^n))\lf x(t_{i+1}^n\wedge T)-x(t_i^n\wedge T)\rg \end{equation} is well posed by absolute convergence. \end{proposition} The integral on the left-hand side of \eq{follmer_int} is referred to as the \emph{F\"ollmer integral} of $f\circ x$ with respect to $x$ along $\Pi$.
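For a continuous path and $f(x)=x^2$, formula \eq{follmer_ito} reduces, along each finite partition, to the exact algebraic identity $x(T)^2-x(0)^2=\sum_i 2x(t^n_i)\lf x(t^n_{i+1})-x(t^n_i)\rg+\sum_i\lf x(t^n_{i+1})-x(t^n_i)\rg^2$; the following sketch (ours, with arbitrary seed) checks it on a sampled Brownian path.
\begin{verbatim}
import numpy as np

T, N = 1.0, 2**20
rng = np.random.default_rng(3)
x = np.concatenate(([0.0],
        np.cumsum(rng.standard_normal(N) * np.sqrt(T / N))))

dx = np.diff(x)
follmer_int = np.sum(2.0 * x[:-1] * dx)   # non-anticipative Riemann sum
qv = np.sum(dx ** 2)                      # quadratic variation term
print(x[-1]**2 - x[0]**2, follmer_int + qv)   # equal up to rounding
\end{verbatim}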
In the multi-dimensional case, where $x$ is $\R^d$-valued and $f\in\C^2(\R^d)$, the pathwise \ito\ formula gives \begin{align} \nonumber f(x(t))={}& f(x(0))+\int_0^t\nabla f(x(s-))\cdot \mathrm{d} x(s)+\frac12\int_{(0,t]}\mathrm{tr}\lf \nabla^2f(x(s))\mathrm{d}[x]^c(s) \rg\\\label{eq:follmer_Dito} &{}+\sum_{0<s\leq t}\lf f(x(s))-f(x(s-))-\nabla f(x(s-))\cdot\De x(s) \rg \end{align} and $$\int_0^t\nabla f(x(s-))\cdot \mathrm{d} x(s):=\Limn \sum_{t_i^n\leq t}\nabla f(x(t_i^n))\cdot\incrx{x},$$ where $[x]=([x^i,x^j])_{i,j=1,\ldots,d}$ and, for all $t\geq0$, \begin{align*} [x^i,x^j](t)={}&\frac12\lf[x^i+x^j](t)-[x^i](t)-[x^j](t)\rg\\ {}={}&[x^i,x^j]^c(t)+\sum_{0<s\leq t}\De x^i(s)\De x^j(s). \end{align*} F\"ollmer also pointed out that the class of functions with finite quadratic variation is stable under $\C^1$ transformations and, given $x$ with finite quadratic variation along $\Pi$ and $f\in\C^1(\R^d)$, the composite function $y=f\circ x$ has finite quadratic variation $$[y](t)=\int_{(0,t]}\mathrm{tr}\lf \nabla f(x(s))\,{}^t\!\nabla f(x(s))\,\mathrm{d}[x]^c(s)\rg+\sum_{0<s\leq t}\De y^2(s).$$
Further, he enlarged the scope of the above results by considering stochastic processes whose paths almost surely have finite quadratic variation along some proper sequence of partitions. For example, let $S$ be a semimartingale on a probability space \ps; it is well known that there exists a sequence of random partitions, $\Pi=(\pi_n)_{n\geq1}$, $|\pi_n|\limn0$ $\PP$-almost surely, such that $$\PP\lf\{\w\in\O,\; S(\cdot,\w) \text{ has F\"ollmer's quadratic variation along }\Pi\}\rg=1.$$ More generally, this holds for any so-called \textit{Dirichlet} (or \textit{finite energy}) \textit{process}, that is the sum of a semimartingale and a process with zero quadratic variation along the dyadic subdivisions. Thus, the pathwise \ito\ formula holds and the pathwise F\"ollmer integral is still defined for all paths outside a null set.
A last comment on the link between the \ito\ and \follmer\ integrals is the following. For a semimartingale $X$ and a \cadlag\ adapted process $H$, we know that, for any $t\geq0$, $$ \sum_{t_i^n\leq t} H(t_i^n)\cdot\incrx{X} \limnp \int_0^tH(s-)\cdot\mathrm{d} X(s),$$ hence we obtain almost-sure pathwise convergence only outside a null set of paths depending on $H$, which is not of practical utility. However, in the case $H=f\circ X$ with $f\in\C^1$, we can select a priori the null set out of which the definition~\eqref{eq:follmer_int} holds and so, by almost sure uniqueness of the limit in probability, the F\"ollmer integral must coincide almost surely with the \ito\ integral.
\subsubsection{Norvai\v sa's quadratic variation and chain rules}
Norvai\v sa's notion of quadratic variation was proposed in \cite{norvaisa} in order to weaken the requirement of local 2-variation used to prove chain rules and integration-by-parts formulas for extended Riemann-Stieltjes integrals. \begin{definition}\label{def:qv-norvaisa}
Given a dense nested sequence $\l=\{\l_n\}_{n\geq1}$ of partitions of $[0,T]$, \emph{Norvai\v sa's quadratic $\l$-variation} of a regulated function $f:[0,T]\to\R$ is defined, if it exists, as a regulated function $H:[0,T]\to\R$ such that $H(0)=0$ and, for any $0\leq s\leq t\leq T$, \begin{equation}\label{eq:Nqv} H(t)-H(s)=\Limn s_2(f;\l_n\Cap[s,t]), \end{equation} \begin{equation}\label{eq:Njumps} \De^-H(t)=(\De^-f(t))^2\quad \text{and}\quad \De^+H(t)=(\De^+f(t))^2, \end{equation} where $\l_n\Cap[s,t]:=(\l_n\cap[s,t])\cup\{s\}\cup\{t\}$, $\De^-x(t)=x(t)-x(t-)$, and $\De^+x(t)=x(t+)-x(t)$. \end{definition} In fact, Norvai\v sa's original definition is given in terms of an additive upper continuous function defined on the simplex of extended intervals of $[0,T]$, but he showed the equivalence with the definition given here, which we chose to report because it allows us to avoid introducing further notation.
Following F\"ollmer's approach in \cite{follmer}, \citet{norvaisa} also proved a chain rule for a function with finite $\l$-quadratic variation, involving a new type of integrals called Left (respectively Right) Cauchy $\l$-integrals. We report here the formula obtained for the left integral, but a symmetric formula holds for the right integral.
Given two regulated functions $\phi,g$ on $[0,T]$ and a dense nested sequence of partitions $\l=\{\l_n\}$, \emph{the Left Cauchy $\l$-integral} $(LC)\!\int \phi\mathrm{d}_\l g$ is defined on $[0,T]$ if there exists a regulated function $\Phi$ on $[0,T]$ such that $\Phi(0)=0$ and, for any $0\leq u<v\leq T$, $$\bea{c}\Phi(v)-\Phi(u)=\Limn S_{LC}(\phi,g;\l_n\Cap[u,v]),\\ \De^-\Phi(v)=\phi(v-)\De^-g(v),\quad\De^+\Phi(u)=\phi(u)\De^+g(u), \end{array}$$ where $$S_{LC}(\phi,g;\k):=\sum_{i=0}^{m-1}\phi(t_i)(g(t_{i+1})-g(t_i))\quad\text{for any }\k=\{t_i\}_{i=0}^m.$$ In such a case, we denote $(LC)\!\int_u^v\phi\mathrm{d}_\l g:=\Phi(v)-\Phi(u).$ \begin{proposition}[Proposition 1.4 in \cite{norvaisa}]
Let $g$ be a regulated function on $[0,T]$ and $\l=\{\l_n\}$ a dense nested sequence of partitions such that $\{t:\,\De^+g(t)\neq0\}\subset\cup_{n\in\NN}\l_n$. The following are equivalent:
\begin{enumerate}[(i)]
\item $g$ has Norvai\u sa's $\l$-quadratic variation;
\item for any $C^1$ function $\phi$ with antiderivative $\Phi$ (i.e. $\Phi'=\phi$), $\phi\circ g$ is Left Cauchy $\l$-integrable on $[0,T]$ and, for any $0\leq u<v\leq T$, \begin{align}\label{eq:chainrule-LC} \Phi\circ g(v)-\Phi\circ g(u)={}&(LC)\!\int_u^v(\phi\circ g)\mathrm{d}_\l g+\frac12\int_u^v(\phi'\circ g)\mathrm{d}[g]^c_\l\\ &{}+\sum_{t\in[u,v)}\lf\De^-(\Phi\circ g)(t)-(\phi\circ g)(t-)\De^-g(t)\rg \nonumber\\ &{}+\sum_{t\in(u,v]}\lf\De^+(\Phi\circ g)(t)-(\phi\circ g)(t)\De^+g(t)\rg. \nonumber \end{align}
\end{enumerate} \end{proposition} Note that the change of variable formula \eq{chainrule-LC} reduces to F\"ollmer's formula \eq{follmer_ito} when $g$ is right-continuous, in which case the Left Cauchy $\l$-integral coincides with the F\"ollmer integral along $\l$ defined in \eq{follmer_int}.
\subsubsection{Vovk's quadratic variation} \citet{vovk-cadlag} defines a notion of quadratic variation along a sequence of partitions not necessarily dense in $[0,T]$ and uses it to investigate the properties of \lq typical price paths\rq, i.e. price paths which rule out arbitrage opportunities in his pathwise framework, following a game-theoretic probability approach. \begin{definition}\label{def:qv-vovk}
Given a nested sequence $\Pi=\{\pi_n\}_{n\geq1}$ of partitions of $[0,T]$, $\pi_n=(t_i^n)_{i=0,\ldots,m(n)}$ for all $n\in\NN$, a \cadlag\ function $x:[0,T]\to\R$ is said to have \emph{Vovk's quadratic variation along} $\Pi$ if the sequence $\{A^{n,\Pi}\}_{n\in\NN}$ of functions defined by $$A^{n,\Pi}(t):=\sum_{i=0}^{m(n)-1}(x(t^n_{i+1}\wedge t)-x(t^n_i\wedge t))^2,\quad t\in[0,T],$$ converges uniformly in time. In this case, the limit is denoted by $A^\Pi$ and called Vovk's quadratic variation of $x$ along $\Pi$. \end{definition} An interesting result in \cite{vovk-cadlag} is that typical paths have Vovk's quadratic variation along a specific nested sequence $\{\t_n\}_{n\geq1}$ of partitions composed of stopping times and such that, on each realized path $\w$, $\{\t_n(\w)\}_{n\geq1}$ \emph{exhausts} $\w$, i.e. $\{t:\,\De\w(t)\neq0\}\subset\cup_{n\in\NN}\t_n(\w)$ and, for each open interval $(u,v)$ in which $\w$ is not constant, $(u,v)\cap(\cup_{n\in\NN}\t_n(\w))\neq\emptyset$.
The most evident difference between definitions \ref{def:qv1}, \ref{def:qv-follmer}, \ref{def:qv-norvaisa}, \ref{def:qv-vovk} is that the first two of them require the sequence of partitions to be dense, the third one requires the sequence of partitions to be dense and nested, and the last one requires a nested sequence of partitions. Moreover, Norvai\v sa's definition is given for a regulated, rather than \cadlag, function.
Vovk proved that for a nested sequence $\Pi=\{\pi_n\}_{n\geq1}$ of partitions of $[0,T]$ that exhausts $\w\in D([0,T],\R)$, the following are equivalent: \begin{enumerate}[(a)] \item $\w$ has Norvai\v sa's quadratic $\Pi$-variation; \item $\w$ has Vovk's quadratic variation along $\Pi$; \item $\w$ has \emph{weak quadratic variation along $\Pi$}, i.e. there exists a \cadlag\ function $V:[0,T]\to\R$ such that $$V(t)=\Limn\sum_{i=0}^{m(n)-1}(\w(t^n_{i+1}\wedge t)-\w(t^n_i\wedge t))^2$$ for all points $t\in[0,T]$ of continuity of $V$, and $V$ satisfies \eq{qv-jumps} with $[x]_\Pi$ replaced by $V$. \end{enumerate} Moreover, if any of the above conditions is satisfied, then $H=A^\Pi=V$.
If, furthermore, $\Pi$ is also dense, then $\w$ has F\"ollmer's quadratic variation along $\Pi$ if and only if it has any of the quadratic variations in (a)-(c), in which case $H=A^\Pi=V=[\w]$.
In this thesis, we will always consider the quadratic variation of a \cadlag\ path $\w$ along a dense nested sequence $\Pi$ of partitions that exhausts $\w$, in which case our \defin{qv1} is equivalent to all the other ones mentioned above. Indeed, condition (b) implies that $\w$ has finite quadratic variation according to \defin{qv1} with $[\w]=A^\Pi$ (as shown by the computation below), while the properties in \defin{qv1} imply the ones in \defin{qv-follmer}, which, by Proposition 4 in \cite{vovk-cadlag}, imply condition (b). To see the first implication, denote $\bar k(n,t):=\max\{i=0,\ldots,m(n)-1:\,t^n_i\leq t\}$ and note that \begin{multline*} A^{n,\Pi}(t)-\sum_{\stackrel{i=0,\ldots,m(n)-1:}{t^n_i\leq t}}(\w(t^n_{i+1})-\w(t^n_{i}))^2=\\ =(\w(t)-\w(t^n_{\bar k(n,t)}))^2-(\w(t^n_{\bar k(n,t)+1})-\w(t^n_{\bar k(n,t)}))^2\limn 0 \end{multline*} by right-continuity of $\w$ if $t\in\cup_{n\in\NN}\pi_n$, and by the assumption that $\Pi$ exhausts $\w$ if $t\notin\cup_{n\in\NN}\pi_n$.
\section{Non-anticipative functionals}\label{sec:pre}
First, we summarize the functional notation adopted in this thesis, following the lecture notes \cite{cont-notes}, which unify the different notations used in the papers on the subject into a single clear language.
As usual, we denote by $\DT$ the space of \cadlag\ functions on $[0,T]$ with values in $\R^d$. Concerning maps $x\in\DT$, for any $t\in[0,T]$ we denote: \begin{itemize} \item $x(t)\in\R^d$ its value at $t$; \item $x_t=x(t\wedge\cdot)\in\DT$ its path \lq stopped\rq\ at time $t$; \item $x_{t-}=x\ind_{[0,t)}+x(t-)\ind_{[t,T]}\in\DT$;
\item for $\d\in\R^d$, $x_t^\d=x_t+\d\ind_{[t,T]}\in\DT$ the \textit{vertical perturbation} of size $\d$ of the path of $x$ stopped at $t$ over the future time interval $[t,T]$; \end{itemize}
A \textit{non-anticipative functional} on $\DT$ is defined as a family of functionals on $\DT$ adapted to the natural filtration $\FF=(\F_t)_{t\in[0,T]}$ of the canonical process on $\DT$, i.e. $F=\{F(t,\cdot),\,t\in[0,T]\},$ such that $$\forall t\in[0,T],\quad F(t,\cdot):\DT\mapsto\R\text{ is }\F_t\text{-measurable}.$$ It can be viewed as a map on the space of \lq stopped\rq\ paths $\L_T:=\{(t,x_t):\:(t,x)\in[0,T]\times\DT\}$, which is in turn the quotient of $[0,T]\times\DT$ by the equivalence relation $\sim$ such that $$\forall(t,x),(t',x')\in[0,T]\times\DT,\quad(t,x)\sim(t',x') \iff t=t',x_t=x'_{t}.$$ Thus, we will usually write a \naf\ as a map $F:\L_T\to\R$.
The space $\L_T$ is equipped with a distance $\dinf$, defined by
$$\dinf((t,x),(t',x'))=\sup_{u\in[0,T]}|x(u\wedge t)-x'(u\wedge t')|+|t-t'|=||x_t-x'_{t'}||_\infty+|t-t'|,$$ for all $(t,x),(t',x')\in\L_T$. Note that $(\L_T,\dinf)$ is a complete metric space and the subset of continuous stopped paths, $$\W_T:=\{(t,x)\in\L_T:\,x\in\C([0,T],\R^d)\},$$ is a closed subspace of $(\L_T,\dinf)$.
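For paths sampled on a common time grid, the distance $\dinf$ is straightforward to evaluate; the following helper (hypothetical, for illustration only) computes it.
\begin{verbatim}
import numpy as np

def stopped(x, grid, t):
    # sampled version of the stopped path x(. ^ t)
    i = np.searchsorted(grid, t, side="right") - 1
    return np.where(np.arange(len(grid)) <= i, x, x[i])

def d_inf(t, x, tp, xp, grid):
    # d_inf((t,x),(t',x')) = ||x_t - x'_{t'}||_inf + |t - t'|
    return (np.max(np.abs(stopped(x, grid, t) - stopped(xp, grid, tp)))
            + abs(t - tp))

grid = np.linspace(0.0, 1.0, 1001)
x, xp = np.sin(2 * np.pi * grid), np.cos(2 * np.pi * grid)
print(d_inf(0.5, x, 0.6, xp, grid))
\end{verbatim}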
We recall here all the notions of functional regularity that will be used henceforth. \begin{definition}\label{def:regF} A \naf\ $F$ is: \begin{itemize} \item \emph{continuous at fixed times} if, for all $t\in[0,T]$,
$$F(t,\cdot):\lf\lf\{t\}\times\DT\rg/\sim,||\cdot||_\infty\rg\mapsto\R$$ is continuous, that is $$\bea{c}\forall x\in\DT, \forall\e>0,\,\exists\eta>0:\quad \forall x'\in\DT,\\
||x_t-x'_t||_\infty<\eta\quad\Rightarrow\quad|F(t,x)-F(t,x')|<\e; \end{array}$$ \item \emph{jointly-continuous}, i.e. $F\in\CC^{0,0}(\L_T)$, if $F:\lf\L_T,\dinf\rg\to\R$ is continuous; \item \emph{left-continuous}, i.e. $F\in\CC_l^{0,0}(\L_T)$, if $$\bea{c}\forall (t,x)\in\L_T, \forall\e>0,\,\exists\eta>0:\quad \forall h\in[0,t],\,\forall (t-h,x')\in\L_T,\\
\quad \dinf((t,x),(t-h,x'))<\eta\quad\Rightarrow\quad|F(t, x)-F(t-h,x')|<\e; \end{array}$$ a symmetric definition characterizes the set $\CC_r^{0,0}(\L_T)$ of \emph{right-continuous} functionals; \item \emph{boundedness-preserving}, i.e. $F\in\BB(\L_T)$, if, $$\bea{c}\forall K\subset\R^d\text{ compact, }\forall t_0\in[0,T],\,\exists C_{K,t_0}>0;\quad \forall t\in[0,t_0],\,\forall (t,x)\in\L_T,\\
x([0,t])\subset K \Rightarrow |F(t,x)|<C_{K,t_0}. \end{array}$$
\end{itemize} \end{definition}
Now, we recall the notions of differentiability for \naf s. \begin{definition}\label{def:derF}
A \naf\ $F$ is said: \begin{itemize} \item \emph{horizontally differentiable at} $(t,x)\in\L_T$ if the limit $$\lim_{h\rightarrow0^+}\frac{F(t+h,x_{t})-F(t,x_t)}{h}$$ exists and is finite, in which case it is denoted by $\hd F(t,x)$; if this holds for all $(t,x)\in\L_T$ and $t<T$, then the \naf\ $\hd F=(\hd F(t,\cdot))_{t\in[0,T)}$ is called the \emph{horizontal derivative} of $F$; \item \emph{vertically differentiable at} $(t,x)\in\L_T$ if the map $$\R^d\rightarrow\R,\; e\mapsto F(t, x_t^e)$$ is differentiable at 0 and in this case its gradient at 0 is denoted by $\vd F(t, x)$;
if this holds for all $(t,x)\in\L_T$, then the $\R^d$-valued \naf\ $\vd F=(\vd F(t,\cdot))_{t\in[0,T]}$ is called the \emph{vertical derivative} of $F$. \end{itemize} \end{definition}
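To illustrate \defin{derF}, the following sketch (ours; step sizes and the test path are arbitrary) approximates the two derivatives of the smooth functional $F(t,x_t)=x(t)\int_0^t x(u)\,\mathrm{d} u$ by finite differences; here $\hd F(t,x_t)=x(t)^2$, since the stopped path is extended by freezing $x$ at $x(t)$, while $\vd F(t,x_t)=\int_0^t x(u)\,\mathrm{d} u$, since bumping $x(t)$ does not affect the integral over $[0,t]$.
\begin{verbatim}
import numpy as np

M = 100_000
grid = np.linspace(0.0, 1.0, M + 1)
du = grid[1] - grid[0]
x = np.sin(2 * np.pi * grid)                  # test path

def F(i, path):                               # F(t_i, path)
    return path[i] * np.sum(path[:i]) * du    # left-point Riemann sum

i = M // 4                                    # t = 0.25
# horizontal derivative: extend the stopped path, frozen at x(t), by h
xh = x.copy(); xh[i:i + 11] = x[i]
hd = (F(i + 10, xh) - F(i, x)) / (10 * du)    # ~ x(t)^2
# vertical derivative: bump the path at t by e
e = 1e-6
xe = x.copy(); xe[i] += e
vd = (F(i, xe) - F(i, x)) / e                 # ~ int_0^t x(u) du
print(hd, x[i]**2, vd, np.sum(x[:i]) * du)
\end{verbatim}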
Then, the class of smooth functionals is defined as follows: \begin{itemize} \item $\CC^{1,k}(\L_T)$ the set of \naf s $F$ which are \begin{itemize} \item horizontally differentiable with $\hd F$ continuous at fixed times, \item $k$ times vertically differentiable with $\vd^j F\in\CC^{0,0}_l(\L_T)$ for $j=0,\ldots,k$; \end{itemize} \item $\CC^{1,k}_b(\L_T)$ the set of \naf s $F\in\CC^{1,k}(\L_T)$ such that $\hd F,\vd F,\ldots,\vd^kF\in\BB(\L_T)$. \end{itemize} However, many examples of functionals in applications fail to be globally smooth, especially those involving exit times. Fortunately, the global smoothness characterizing the class $\Cb(\L_T)$ is in fact sufficient but not necessary to get the functional \ito\ formula. Thus, we will often require only the following weaker property of local smoothness, introduced in \cite{fournie}.
A \naf\ $F$ is said to be \emph{locally regular}, i.e. $F\in\Cloc(\L_T)$, if $F\in\CC^{0,0}(\L_T)$ and there exist a sequence of stopping times $\{\t_k\}_{k\geq0}$ on $(\DT,\F_T,\FF)$, such that $\t_0=0$ and $\t_k\to_{k\to\infty}\infty$, and a family of \naf s $\{F^k\in\Cb(\L_T)\}_{k\geq0},$
such that $$F(t,x_t)=\sum_{k\geq0}F^k(t,x_t)\ind_{[\t_k( x),\t_{k+1}(x))}(t) \zs.$$
\section{Change of variable formulae for functionals}
In 2010, \citet{contf2010} extended \follmer's change of variable formula to \naf s on $\DT$, hence allowing one to define an analogue of the \follmer\ integral for functionals. The pathwise formulas are also viable for a wide class of stochastic processes in an ``almost-sure'' sense. The setting of \citet{contf2010} is more general than what we need, so we report here its main results in a simplified version.
\begin{remark}[Proposition 1 in \cite{contf2010}]\label{rmk:regularity} Useful pathwise regularities follow from the continuity of \naf s: \begin{enumerate} \item If $F\in\CC_l^{0,0}(\L_T)$, then for all $x\in\DT$ the path $t\mapsto F(t,x_{t-})$ is left-continuous; \item If $F\in\CC_r^{0,0}(\L_T)$, then for all $x\in\DT$ the path $t\mapsto F(t,x_t)$ is right-continuous; \item If $F\in\CC^{0,0}(\L_T)$, then for all $x\in\DT$ the path $t\mapsto F(t,x_t)$ is \cadlag\ and continuous at each point where $x$ is continuous. \item If $F\in\BB(\L_T)$, then $\forall x\in\DT$ the path $t\mapsto F(t,x_t)$ is bounded. \end{enumerate} \end{remark}
Below is one of the main results of \cite{contf2010}: the \emph{change of variable formula for \naf s of \cadlag\ paths}. We only report the formula for \cadlag\ paths because the change of variable formula for functionals of continuous paths (\cite[Theorem 3]{contf2010}) can then be obtained with straightforward modifications.
\begin{theorem}[Theorem 4 in \cite{contf2010}]\label{thm:fif-d}
Let $x\in Q(\DT,\Pi)$ such that \begin{equation} \label{eq:ass_w}
\sup\limits_{t\in[0,T]\setminus\pi^n}|\De x(t)|\limn0 \end{equation}
and denote \begin{equation}\label{eq:wn} x^n:=\sum_{i=0}^{m(n)-1}x(t^n_{i+1}-)\ind_{[t^n_i,t^n_{i+1})}+x(T)\ind_{\{T\}} \end{equation} Then, for any $F\in\Cloc(\L_T)$, the limit \begin{equation} \label{eq:int-d}
\Limn\sum_{i=0}^{m(n)-1}\vd F(t_i^n,x^{n,\De x(t^n_i)}_{t^n_i-})(x(t_{i+1}^n)-x(t_i^n)) \end{equation} exists, denoted by $\int_0^T\vd F(t,x_{t-})\cdot\mathrm{d}^{\Pi}x$, and \begin{align} \label{eq:fif-d}
F(T,x)={} & F(0,x)+\int_0^T\vd F(t,x_{t-})\cdot\mathrm{d}^{\Pi}x+ \\
&{} +\int_0^T\hd F(t,x_{t-})\mathrm{d} t+\int_0^T\frac12\tr\lf\vd^2F(t,x_{t-})\mathrm{d}[x]_\Pi^c(t)\rg+\nonumber \\
&{} +\sum_{u\in(0,T]}\lf F(u,x)-F(u,x_{u-})-\vd F(u,x_{u-})\cdot\De x(u)\rg.\nonumber \end{align} \end{theorem} Note that the assumption \eq{ass_w} can always be removed, simply by including all jump times of the \cadlag\ path $x$ in the fixed sequence of partitions $\Pi$. Hence, in the sequel we will omit such an assumption.
The proof, in the simpler case of continuous paths, revolves around the idea of rewriting the variation of $F(\cdot,x)$ on \OT\ as the limit for $n$ going to infinity of the sum of the variations of $F(\cdot,x^n)$ on the consecutive time intervals in the partition $\pi^n$. In particular, these variations can be decomposed along two directions, horizontal and vertical. That is: $$F(T,x_T)-F(0,x_{0})=\Limn\sum_{i=0}^{m(n)-1}\lf F(t_{i+1}^n,x^{n}_{t^n_{i+1}-})-F(t_{i}^n,x^{n}_{t^n_{i}-})\rg,$$ where \begin{align} F(t_{i+1}^n,x^{n}_{t^n_{i+1}-})-F(t_{i}^n,x^{n}_{t^n_{i}-})={}&F(t_{i+1}^n,x^{n}_{t^n_i})-F(t_{i}^n,x^{n}_{t^n_{i}})\label{eq:incr1}\\ &{}+F(t_{i}^n,x^{n}_{t^n_i})-F(t_{i}^n,x^{n}_{t^n_{i}-})\label{eq:incr2}. \end{align} Then, it is possible to rewrite the two increments on the right-hand side in terms of increments of two functions on $\R^d$. Indeed, defining the left-continuous and right-differentiable function $\psi(u):=F(t_i^n+u,x^{n}_{t^n_i})$ and setting $h^n_i:=t^n_{i+1}-t^n_i$, \eq{incr1} is equal to $$\psi(h^n_i)-\psi(0)=\int_{t^n_i}^{t^n_{i+1}}\hd F(t,x^{n}_{t^n_{i}})\mathrm{d} t,$$ while, defining the function $\phi(u):=F(t_i^n,x^{n,u}_{t^n_i-})$ of class $\C^2(B(0,\y_n),\R)$, where $$\y_n:=\sup\{\abs{x(u)-x(t^n_{i+1})}+\abs{t^n_{i+1}-t^n_i},\,0\leq i\leq m(n)-1,\,u\in[t^n_i,t^n_{i+1})\},$$ \eq{incr2} is equal to $$\phi(\d x^n_i)-\phi(0)=\vd F(t_i^n,x^{n}_{t^n_i-})\cdot\d x^n_i+\frac12\tr\lf\vd^2F(t_i^n,x^{n}_{t^n_i-})\,^t\!(\d x^n_i)\d x^n_i\rg+r^n_i,$$ where $\d x^n_i:=x(t^n_{i+1})-x(t^n_i)$ and $$r^n_i\leq K\abs{\d x^n_i}^2\sup_{u\in B(0,\y_n)}\abs{\vd^2F(t_i^n,x^{n,u}_{t^n_i-})-\vd^2F(t_i^n,x^{n}_{t^n_i-})}.$$ The sum over $i=0,\ldots,m(n)-1$ of \eq{incr1}, by the dominated convergence theorem, converges to $\int_{0}^T\hd F(t,x_t)\mathrm{d} t$. On the other hand, by Lemma 12 in \cite{contf2010} and weak convergence of the Radon measures in \eq{xin}, we have $$\sum_{i=0}^{m(n)-1}\frac12\tr\lf\vd^2F(t_i^n,x^{n}_{t^n_i-})\,^t\!(\d x^n_i)\d x^n_i\rg \limn \int_{0}^T\frac12\tr\lf\vd^2F(t,x_t) \mathrm{d}[x](t)\rg$$ and the sum of the remainders goes to 0. Therefore, the limit of the sum of the first order terms exists and the change of variable formula (see \eq{fif-c} below) holds.
The route to the change of variable formula for \cadlag\ paths is much more intricate than in the continuous case, but the idea is the following. We can rewrite the variation of $F$ over \OT\ as before, but now we split the indexes between two complementary sets $I_1(n),I_2(n)$. Namely: let $\eps>0$, let $C_2(\eps)$ be a set of jump times such that $\sum_{s\in C_2(\eps)}\abs{\De x(s)}^2<\eps^2$ and $C_1(\eps)$ be its complementary, finite set of jump times; denote $I_1(n):=\{i\in\{0,\ldots,m(n)-1\}:\,(t^n_i,t^n_{i+1}]\cap C_1(\eps)\neq\emptyset\}$ and $I_2(n):=\{0,\ldots,m(n)-1\}\setminus I_1(n)$, then \begin{align*} F(T,x_T)-F(0,x_{0})={}&\Limn\sum_{i\in I_1(n)}\lf F(t_{i+1}^n,x^{n,\De x(t^n_{i+1})}_{t^n_{i+1}-})-F(t_i^n,x^{n,\De x(t^n_{i})}_{t^n_i-})\rg+\\ &{}+\Limn\sum_{i\in I_2(n)}\lf F(t_{i+1}^n,x^{n,\De x(t^n_{i+1})}_{t^n_{i+1}-})-F(t_i^n,x^{n,\De x(t^n_{i})}_{t^n_i-})\rg. \end{align*} The first sum converges, for $n$ going to infinity, to $\sum_{u\in C_1(\eps)}\lf F(u,x_u)-F(u,x_{u-})\rg$, while the increments in the second sum are further decomposed into a horizontal and two vertical variations. After several intermediate steps one obtains: \begin{align} &F(T,x_T)-F(0,x_{0})=\nonumber\\ ={}&\int_{(0,T]}\hd F(t,x_t)\mathrm{d} t+\int_{(0,T]}\frac12\tr\lf\vd^2F(t,x_t) \mathrm{d}[x](t)\rg+\nonumber\\ &{}+\Limn\sum_{i=0}^{m(n)-1}\vd F(t_i^n,x^{n,\De x(t^n_i)}_{t^n_i-})\cdot(x(t_{i+1}^n)-x(t_i^n))+\nonumber\\ &{}+\sum_{u\in C_1(\eps)}\lf F(u,x_u)-F(u,x_{u-})-\vd F(u,x_{u-})\cdot\De x(u)\rg+\a(\eps),\label{eq:sum} \end{align} where $\abs{\a(\eps)}\leq K(\eps^2+T\eps)$. Finally, the sum in \eq{sum} over $C_1(\eps)$ converges, for $\eps$ going to 0, to the same sum over $(0,T]$, and the formula \eq{fif-d} holds.
It is important to remark that, to obtain the change of variable formula on continuous paths, it suffices to require the smoothness of the restriction of the \naf\ $F$ to the subspace of continuous stopped paths (see \cite[Theorems 2.27,2.28]{cont-notes}). To this end, one defines the class $\Cb(\W_T)$ of \naf s $F$ admitting an extension $\tilde F$ of class $\Cb(\L_T)$ that coincides with $F$ when restricted to $\W_T$. Then, the following theorem holds: \begin{theorem}[Theorem 2.29 in \cite{cont-notes}]\label{thm:fif-c}
For any $F\in\Cloc(\W_T)$ and $x\in Q(C([0,T],\R^d),\Pi)$, the limit \begin{equation} \label{eq:int-c}
\Limn\sum_{i=0}^{m(n)-1}\vd F(t_i^n,x^{n}_{t^n_i})(x(t_{i+1}^n)-x(t_i^n)) \end{equation} exists, denoted by $\int_0^T\vd F(t,x_{t})\cdot\mathrm{d}^{\Pi}x$, and \begin{align} \label{eq:fif-c}
F(T,x)={} & F(0,x)+\int_0^T\vd F(t,x_{t})\cdot\mathrm{d}^{\Pi}x+ \\
&{} +\int_0^T\hd F(t,x_{t})\mathrm{d} t+\int_0^T\frac12\tr\lf\vd^2F(t,x_{t})\mathrm{d}[x](t)\rg.\nonumber \end{align} \end{theorem}
As remarked in \cite{contf2010}, the change of variable formula \eq{fif-d} also holds for right-continuous, instead of left-continuous, functionals, by redefining the pathwise integral \eq{int-d} as $$\Limn\sum_{i=0}^{m(n)-1}\vd F(t_{i+1}^n,x^{n}_{t^n_i})\cdot(x(t_{i+1}^n)-x(t_i^n))$$ and the stepwise approximation $x^n$ in \eq{wn} as $$x^n:=\sum_{i=0}^{m(n)-1}x(t^n_{i})\ind_{[t^n_i,t^n_{i+1})}+x(T)\ind_{\{T\}}.$$
\chapter{Functional \ito\ Calculus} \label{chap:fic}
The \lq\ito\ calculus\rq\ is a powerful tool at the core of stochastic analysis and lies at the foundation of modern Mathematical Finance. It is a calculus which applies to functions of the current state of a stochastic process, and extends the standard differential calculus to functions of processes with non-smooth paths of infinite variation. In many applications, however, the quantity of interest depends not only on the current state but on the whole (past) history of the process, and it is necessary to consider functionals, rather than functions, of a stochastic process, i.e. quantities of the form $$F(X_t),\quad\text{where }X_t=\{X(u), u\in[0,t]\}.$$ Such functionals appear in many financial applications, such as the pricing and hedging of path-dependent options, and in (non-Markovian) stochastic control problems. One framework for dealing with functionals of stochastic processes is the Fr\'echet calculus, but many path-dependent quantities arising in stochastic analysis are not Fr\'echet-differentiable. This motivated the development of a new theoretical framework for functionals of a stochastic process: the Malliavin calculus \cite{malliavin,nualart09}, a weak (variational) differential calculus for functionals on the Wiener space. The theory of Malliavin calculus has found many applications in financial mathematics, specifically in problems dealing with path-dependent instruments. However, the Malliavin derivative involves perturbations affecting the whole path (both past and future) of the process. This notion of perturbation is not readily interpretable in applications such as optimal control or hedging, where the quantities involved are required to be causal, i.e. non-anticipative, processes.
In an insightful paper, Bruno Dupire~\cite{dupire}, inspired by methods used by practitioners for the sensitivity analysis of path-dependent derivatives, introduced a new notion of functional derivative, and used it to extend the \ito\ formula to the path-dependent case. Inspired by Dupire's work, Cont and Fourni\'e \cite{ContFournie09a,contf2010,contf2013} developed a rigorous mathematical framework for a path-dependent extension of the \ito\ calculus, the Functional \ito\ Calculus~\cite{contf2013}, as well as a purely pathwise functional calculus~\cite{contf2010} (see \chap{pfc}), proving the pathwise nature of some of the results obtained in the probabilistic framework.
The idea is to control the variations of a functional along a path by controlling its sensitivity to horizontal and vertical perturbations of the path, by defining functional derivatives corresponding to infinitesimal versions of these perturbations. These tools led to \begin{itemize}\item a new class of {\bf ``path-dependent PDEs''} on the space of \cadlag\ paths $D([0,T],\R^d)$, extending the Kolmogorov equations to a non-Markovian setting, \item a {\bf universal hedging formula} and a {\bf universal pricing equation} for path-dependent options.\end{itemize}
In this chapter we develop the key concepts and main results of the Functional \ito\ Calculus, following \citet{contf2013,cont-notes}.
\section{Functional \ito\ formulae} \sectionmark{The Functional \ito\ formula} \label{sec:fif}
The change of variable formula \eq{fif-d} implies as a corollary the extension of the classical \ito\ formula to the case of \naf s, called the \emph{functional \ito\ formula}. This holds for very general stochastic processes, such as Dirichlet processes, and in particular for semimartingales. We report here the results for \cadlag\ and continuous semimartingales, in which case the pathwise integral \eq{int-d} coincides almost surely with the stochastic integral. The following theorems correspond to Proposition 6 in \cite{contf2010} and Theorem 4.1 in \cite{contf2013}, respectively.
\begin{theorem}[Functional \ito\ formula: \cadlag\ case]\label{thm:fif-sm} Let $X$ be a $\R^d$-valued semimartingale on $(\O,\F,\PP,\FF)$ and $F\in\Cloc(\L_T)$, then, for all $t\in[0,T)$, \begin{align*}
F(t,X_t)={} & F(0,X_{0})+\int_{(0,t]}\vd F(u,X_{u-})\cdot\mathrm{d} X(u)+ \\
&{} +\int_{(0,t]}\hd F(u,X_{u-})\mathrm{d} u+\int_{(0,t]}\frac12\tr\lf\vd^2F(u,X_{u-}) \mathrm{d}[X]^c(u)\rg \\
&{} +\sum_{u\in(0,t]}\lf F(u,X_{u})-F(u,X_{u-})-\vd F(u,X_{u-})\cdot\De X(u)\rg, \end{align*} $\PP$-almost surely. In particular, $(F(t,X_t), t\in[0,T])$ is a semimartingale. \end{theorem}
\begin{theorem}[Functional \ito\ formula: continuous case]\label{thm:fif-csm} Let $X$ be a $\R^d$-valued continuous semimartingale on $(\O,\F,\PP,\FF)$ and $F\in\Cloc(\W_T)$, then, for all $t\in[0,T)$, \begin{align}\label{eq:fif-csm}
F(t,X_t)={} & F(0,X_{0})+\int_0^t\vd F(u,X_{u})\cdot\mathrm{d} X(u)+ \\\nonumber
&{} +\int_0^t\hd F(u,X_{u})\mathrm{d} u+\int_0^t\frac12\tr\lf\vd^2F(u,X_{u})\mathrm{d}[X](u)\rg \end{align} $\PP$-almost surely. In particular, $(F(t,X_t), t\in[0,T])$ is a semimartingale. \end{theorem}
Although the functional \ito\ formulae are a consequence of the stronger pathwise change of variable formulae, \citet{contf2013,cont-notes} also provided a direct probabilistic proof of the functional \ito\ formula for continuous semimartingales, based on the classical \ito\ formula. The proof follows the lines of the proof of \thm{fif-d} in the case of continuous paths, first considering the case of $X$ taking values in a compact set $K$, $\PP$-almost surely, and then passing to the general case. The $i$-th increment of $F(t,X_t)$ along the partition $\pi^n$ is decomposed as: \begin{align*} F(t_{i+1}^n,X^{n}_{t^n_{i+1}-})-F(t_{i}^n,X^{n}_{t^n_{i}-})={}&F(t_{i+1}^n,X^{n}_{t^n_i})-F(t_{i}^n,X^{n}_{t^n_{i}})\\ &{}+F(t_{i}^n,X^{n}_{t^n_i})-F(t_{i}^n,X^{n}_{t^n_{i}-}). \end{align*}
The horizontal increment is treated analogously to the pathwise proof, while for the vertical increment, the classical \ito\ formula is applied to the partial map, which is a $\C^2$-function of the continuous $(\F_{t^n_i+s})_{s\geq0}$-semimartingale $(X(t^n_i+s)-X(t^n_i),\,s\geq0)$. The sum of the increments of the functionals along $\pi_n$ gives: \begin{align*}
F(t,X^n_t)-F(0,X^n_0)={}&\int_0^t\hd F(u,X^n_{i(u)})\mathrm{d} u\\ &{}+\frac12\int_0^t\tr\lf\vd^2F(t^n_{\bar k(u,n)},X_{t^n_{\bar k(u,n)}-}^{n,X(u)-X(t^n_{\bar k(u,n)})})\mathrm{d}[X](u)\rg\\ &{}+\int_0^t\vd F(t^n_{\bar k(u,n)},X_{t^n_{\bar k(u,n)}-}^{n,X(u)-X(t^n_{\bar k(u,n)})})\cdot\mathrm{d} X(u). \end{align*} Formula \eq{fif-csm} then follows by applying the dominated convergence theorem to the Stieltjes integrals on the first two lines and the dominated convergence theorem for stochastic integrals to the stochastic integral on the third line. As for the general case, it suffices to take an increasing sequence of compact sets $(K_k)_{k\geq0}$, $\cup_{k\geq0}K_k=\R^d$, define the stopping times $\bar\t_k:=\inf\{s<t:\,X(s)\notin K_k\}\wedge t$, and apply the previous result to the stopped process $(X_{t\wedge\bar\t_k})$. Finally, taking the limit for $k$ going to infinity completes the proof.
As an immediate corollary, if $X$ is a local martingale and $F\in\Cb(\L_T)$, then $F(\cdot,X_\cdot)$ has finite variation if and only if $\vd F(t,X_t)=0$, $\mathrm{d}[X](t)\times\mathrm{d}\PP$-almost everywhere.
\section{Weak functional calculus and martingale representation} \sectionmark{Weak functional calculus} \label{sec:weak}
\citet{contf2013} extended the pathwise theory to a weak functional calculus that can be applied to all square-integrable martingales adapted to the filtration $\FF^X$ generated by a given $\R^d$-valued square-integrable \ito\ process $X$. \citet{cont-notes} carries the extension further, to all square-integrable semimartingales. Below are the main results on the functional \ito\ calculus obtained in \cite{contf2013,cont-notes}.
Let $X$ be the coordinate process on the canonical space $\DT$ of $\R^d$-valued \cadlag\ processes and $\PP$ be a probability measure under which $X$ is a square-integrable semimartingale such that \begin{equation} [X](t)=\int_0^tA(u)\mathrm{d} u \end{equation} for some $\S^d_+$-valued \cadlag\ process $A$ satisfying \begin{equation}\label{eq:Anon-deg} \mathrm{det}(A(t))\neq0\text{ for almost every }t\in[0,T],\ \PP\text{-almost surely}. \end{equation} Denote by $\FF=\Ft$ the filtration $(\F^X_{t+})_{t\in[0,T]}$ after $\PP$-augmentation. Then define \begin{equation}\label{eq:CX} \Cloc(X):=\{Y:\;\exists F\in\Cloc,\;Y(t)=F(t,X_t)\; \mathrm{d} t\times\mathrm{d}\PP\text{-a.e.}\}. \end{equation} Thanks to the assumption \eq{Anon-deg}, for any adapted process $Y\in\Cloc(X)$, the \emph{vertical derivative of $Y$ with respect to $X$}, $\nabla_XY(t)$, is well defined as $\nabla_XY(t)=\vd F(t,X_t)$ where $F$ satisfies \eq{CX}, and it is unique up to an evanescent set independently of the choice of $F\in\Cloc$ in the representation \eq{CX}.
\thm{fif-sm} leads to the following representation for smooth local martingales. \begin{proposition}[Prop. 4.3 in \cite{cont-notes}]
Let $Y\in\Cb(X)$ be a local martingale. Then $$Y(T)=Y(0)+\int_0^T\nabla_XY(t)\cdot\mathrm{d} X(t).$$ \end{proposition} On the other hand, under specific assumptions on $X$, this leads to an explicit martingale representation formula. \begin{proposition}[Prop. 4.3 in \cite{cont-notes}]\label{prop:mg-repr} If $X$ is a square-integrable $\PP$-Brownian martingale, then, for any square-integrable $\FF$-martingale $Y\in\Cloc(X)$, $\nabla_XY$ is the unique process in the Hilbert space
$$\LL(X):=\left\{\phi\,\text{progressively-measurable},\;\EE^\PP\left[\int_0^T|\phi(t)|^2\mathrm{d}[X](t)\right]<\infty\right\},$$
endowed with the norm $\displaystyle\norm{\phi}_{\LL(X)}:=\EE^\PP\left[\int_0^T|\phi(t)|^2\mathrm{d}[X](t)\right]^{\frac12}$, such that $$Y(T)=Y(0)+\int_0^T\nabla_XY(t)\cdot\mathrm{d} X(t)\quad \PP\text{-a.s.}$$ \end{proposition}
This is used in \cite{contf2013} to extend the domain of the vertical derivative operator $\nabla_X$ to the space of square-integrable $\FF$-martingales $\M^2(X)$, by a density argument.
On the space of smooth square-integrable martingales, $\Cb(X)\cap\M^2(X)$, which is dense in $\M^2(X)$, an integration-by-parts formula holds: for any $Y,Z\in \Cb(X)\cap\M^2(X)$, $$\EE[Y(T)Z(T)]=\EE\left[\int_0^T\nabla_XY(t)\nabla_XZ(t)\mathrm{d}[X](t)\right].$$ By this and by the density of $\{\nabla_XY,\,Y\in\Cloc(X)\}$ in $\LL(X)$, the extension of the vertical derivative operator follows. \begin{theorem}[Theorem 5.9 in \cite{contf2013}]\label{th:weakvd}
The operator $\nabla_X:\Cb(X)\cap\M^2(X)\rightarrow\LL(X)$ admits a closure in $\M^2(X)$. Its closure is a bijective isometry \begin{equation}\label{eq:vdM2} \nabla_X:\M^2(X)\rightarrow\LL(X), \quad \int_0^\cdot\phi(t)\mathrm{d} X(t)\mapsto\phi, \end{equation} characterized by the property that, for any $Y\in\M^2(X)$, $\nabla_X Y$ is the unique element of $\LL(X)$ such that $$\forall Z\in \Cb(X)\cap\M^2(X),\quad \EE[Y(T)Z(T)]=\EE\left[\int_0^T\nabla_XY(t)\nabla_XZ(t)\mathrm{d}[X](t)\right].$$ In particular, $\nabla_X$ is the adjoint of the \ito\ stochastic integral $$I_X:\LL(X)\rightarrow\M^2(X), \quad \phi\mapsto\int_0^\cdot\phi(t)\cdot\mathrm{d} X(t),$$ in the following sense: for all $\phi\in\LL(X)$ and for all $Y\in\M^2(X)$, $$\EE\left[Y(T)\int_0^T\phi(t)\cdot\mathrm{d} X(t)\right]=\EE\left[\int_0^T\nabla_XY(t)\phi(t)\mathrm{d}[X](t)\right].$$
Thus, for any square-integrable $\FF$-martingale $Y$, the following martingale representation formula holds: \begin{equation}\label{eq:mgrepr} Y(T)=Y(0)+\int_0^T\nabla_XY(t)\cdot\mathrm{d} X(t), \quad \PP\text{-a.s.} \end{equation}
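For a concrete elementary illustration of \eq{mgrepr} (ours, not taken from \cite{contf2013}): for $X=W$ a Brownian motion, the martingale $Y(t)=\EE[W(T)^2|\F_t]=W(t)^2+(T-t)$ is smooth with $\nabla_WY(t)=2W(t)$, and the representation can be checked path by path on a fine grid with the following Python sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T, m = 1.0, 100_000
dW = rng.normal(0.0, np.sqrt(T / m), m)
W = np.cumsum(np.r_[0.0, dW])

Y0, YT = T, W[-1] ** 2                 # Y(0) = T,  Y(T) = W(T)^2
ito_sum = np.sum(2.0 * W[:-1] * dW)    # Ito sum of nabla_W Y = 2 W(t)
print(YT - Y0 - ito_sum)               # = sum dW^2 - T, small on fine grids
\end{verbatim}
The residual is exactly the difference between the realized quadratic variation and $T$, and vanishes as the grid is refined.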
Then, denote by $\A^2(\FF)$ the space of $\FF$-predictable absolutely continuous processes $H=H(0)+\int_0^\cdot h(u)\mathrm{d} u$ with finite variation, such that $$\norm{H}^2_{\A^2}:=\EE^{\PP}\left[ \abs{H(0)}^2+ \int_0^T\abs{h(u)}^2\mathrm{d} u\right]<\infty$$ and by $\S^{1,2}(X)$ the space of square-integrable $\FF$-adapted special semimartingales, $\S^{1,2}(X)=\M^2(X)\oplus\A^2(\FF)$, equipped with the norm $\norm{\cdot}_{1,2}$ defined by $$\norm{S}_{1,2}^2:=\EE^\PP\left[[M](T)\right]+\norm{H}^2_{\A^2}, \quad S\in\S^{1,2}(X),$$ where $S=M+H$ is the unique decomposition of $S$ such that $M\in\M^2(X)$, $M(0)=0$ and $H\in\A^2(\FF)$, $H(0)=S(0)$.
The vertical derivative operator admits a unique continuous extension to $\S^{1,2}(X)$ such that its restriction to $\M^2(X)$ coincides with the bijective isometry in \eq{vdM2} and it is null if restricted to $\A^2(\FF)$.
By iterating this construction it is possible to define a series of \lq Sobolev\rq\ spaces $\S^{k,2}(X)$ on which the vertical derivative of order $k$, $\nabla^k_X$, is defined as a continuous operator. We restrict our attention to the space of order 2: $$\S^{2,2}(X):=\{Y\in\S^{1,2}(X):\;\nabla_X Y\in\S^{1,2}(X)\},$$ equipped with the norm $\norm{\cdot}_{2,2}$ defined by $$\norm{Y}_{2,2}^2=\norm{H}^2_{\A^2}+\norm{\nabla_XY}^2_{\LL(X)}+\norm{\nabla^2_{X}Y}^2_{\LL(X)},\quad Y\in\S^{2,2}(X).$$
Note that the second vertical derivative of a process $Y\in\S^{2,2}(X)$ takes values in $\R^{d\times d}$, but it need not be a symmetric matrix, unlike the (pathwise) second vertical derivative of a smooth functional $F\in\C^{1,2}_b(\L_T)$.
The power of this construction is that it is very general, e.g. it applies to functionals with no regularity, and it makes it possible to derive a \lq weak functional \ito\ formula\rq\ involving vertical derivatives of square-integrable processes and a weak horizontal derivative, defined as follows. For any $S\in\S^{2,2}(X)$, the weak horizontal derivative of $S$ is the unique $\FF$-adapted process $\hd S$ such that, for all $t\in[0,T]$, \begin{equation}\label{eq:weak-hd} \int_0^t\hd S(u)\mathrm{d} u=S(t)-S(0)-\int_0^t\nabla_XS\mathrm{d} X-\frac12\int_0^t\tr(\nabla^2_XS(u)\mathrm{d}[X](u)) \end{equation} and $\EE^\PP\left[\int_0^T\abs{\hd S(t)}^2\mathrm{d} t\right]<\infty$. \begin{proposition}[Proposition 4.18 in \cite{cont-notes}] For any $S\in\S^{2,2}(X)$, the following \lq weak functional \ito\ formula\rq\ holds $\mathrm{d} t\times\mathrm{d}\PP$-almost everywhere: \begin{equation}\label{eq:weak-ito} S(t)=S(0)+\int_0^t\nabla_XS\mathrm{d} X+\frac12\int_0^t\tr(\nabla^2_XS\mathrm{d}[X])+\int_0^t\hd S(u)\mathrm{d} u. \end{equation} \end{proposition}
\section{Functional Kolmogorov equations} \label{sec:kolmogorov}
Another important result in \cite{cont-notes} is the characterization of smooth harmonic functionals as solutions of functional Kolmogorov equations. Specifically, a \naf\ $F:\L_T\to\R$ is called \emph{$\PP$-harmonic} if $F(\cdot,X_\cdot)$ is a $\PP$-local martingale, where $X$ is the unique weak solution to the path-dependent stochastic differential equation $$\mathrm{d} X(t)=b(t,X_t)\mathrm{d} t+\s(t,X_t)\mathrm{d} W(t),\quad X(0)=X_0,$$ where $b,\s$ are \naf s with enough regularity and $W$ is a $d$-dimensional Brownian motion on $(\DT,\F_T,\PP)$.
\begin{proposition}[Theorem 5.6 in \cite{cont-notes}]\label{prop:harmonic}
If $F\in\Cb(\W_T)$ and $\hd F\in\CC^{0,0}_l(\W_T)$, then $F$ is a $\PP$-harmonic functional if and only if it satisfies \begin{equation}\label{eq:FPDE-cont} \hd F(t,\w_t)+b(t,\w_t)\vd F(t,\w_t)+\frac12\tr\lf\vd^2F(t,\w_t)\s(t,\w_t){}^t\!\s(t,\w_t)\rg=0 \end{equation} for all $t\in[0,T]$ and all $\w\in\supp(X)$, where \begin{align} \label{eq:supp}
\supp(X):=\big\{&\w\in C([0,T],\R^d):\;\PP(X_T\in V)>0\\ &\forall \text{ neighborhood $V$ of }\w\text{ in }\lf C([0,T],\R^d),\norm{\cdot}_\infty\rg\big\},\nonumber \end{align}
is the topological support of $(X,\PP)$ in $(C([0,T],\R^d),\norm{\cdot}_\infty)$. \end{proposition}
Analogously to classical finite-dimensional parabolic PDEs, we can introduce the notions of sub-solution and super-solution of the functional (or path-dependent) PDE \eq{FPDE-cont}, for which \cite{cont-notes} proved a comparison principle allowing one to establish uniqueness of solutions.
\begin{definition}
$F\in\CC^{1,2}(\L_T)$ is called a \emph{sub-solution} (respectively \emph{super-solution}) of \eq{FPDE-cont} on a domain $U\subset\L_T$ if, for all $(t,\w)\in U$,
\begin{equation} \label{eq:sub}
\hd F(t,\w_t)+b(t,\w_t)\vd F(t,\w_t)+\frac12\tr\lf\vd^2F(t,\w_t)\s(t,\w_t){}^t\!\s(t,\w_t)\rg\geq0
\end{equation} (resp. $\hd F(t,\w_t)+b(t,\w_t)\vd F(t,\w_t)+\frac12\tr\lf\vd^2F(t,\w_t)\s(t,\w_t){}^t\!\s(t,\w_t)\rg\leq0$). \end{definition}
\begin{theorem}[Comparison principle (Theorem 5.11 in \cite{cont-notes})]
Let $\under F\in\CC^{1,2}(\L_T)$ and $\over F\in\CC^{1,2}(\L_T)$ be respectively a sub-solution and a super-solution of \eq{FPDE-cont}, such that $$\bea{c}\forall\w\in C([0,T],\R^d),\quad \under F(T,\w)\leq\over F(T,\w),\\
\EE^\PP\left[ \sup_{t\in[0,T]}|\under F(t,X_t)-\over F(t,X_t)|\right]<\infty. \end{array}$$ Then, $$\forall t\in[0,T),\,\forall\w\in\supp(X),\quad\under F(t,\w_t)\leq\over F(t,\w_t).$$ \end{theorem} This leads to a uniqueness result on the topological support of $X$ for $\PP$-uniformly integrable solutions of the functional Kolmogorov equation.
\begin{theorem}[Uniqueness of solutions (Theorem 5.12 in \cite{cont-notes})]
Let $H:(C([0,T],\R^d),\norm{\cdot}_\infty)\to\R$ be continuous and let $F^1,F^2\in\Cb(\L_T)$ be solutions of \eq{FPDE-cont} verifying $$\bea{c}\forall\w\in C([0,T],\R^d),\quad F^1(T,\w)=F^2(T,\w)=H(\w_T),\\
\EE^\PP\left[ \sup_{t\in[0,T]}|F^1(t,X_t)-F^2(t,X_t)|\right]<\infty. \end{array}$$ Then: $$\forall (t,\w)\in[0,T]\times\supp(X),\quad F^1(t,\w)=F^2(t,\w).$$ \end{theorem}
The uniqueness result, together with the representation of $\PP$-harmonic functionals as solutions of a functional Kolmogorov equation, leads to a Feynman-Kac formula for \naf s.
\begin{theorem}[Feynman-Kac, path-dependent (Theorem 5.13 in \cite{cont-notes})]
Let $H:(C([0,T],\R^d),\norm{\cdot}_\infty)\to\R$ be continuous and let $F\in\Cb(\L_T)$ be a solution of \eq{FPDE-cont} verifying $F(T,\w)=H(\w_T)$ for all $\w\in C([0,T],\R^d)$ and $\EE^\PP\left[\sup_{t\in[0,T]}|F(t,X_t)|\right]<\infty$. Then:
$$\quad F(t,\w)=\EE^\PP[H(X_T)|\F_t]\quad\mathrm{d} t\times\mathrm{d}\PP\text{-a.s.}$$ \end{theorem}
\subsection{Universal pricing and hedging equations} \label{sec:hedgeprice}
Straightforward applications to the pricing and hedging of path-dependent derivatives then follow from the representation of $\PP$-harmonic functionals.
Now we take the point of view of a market agent and suppose that the asset price process $S$ is modeled as the coordinate process on the path space $\DT$ and is a square-integrable martingale under a pricing measure $\PP$, $$\mathrm{d} S(t)=\s(t,S_t)\mathrm{d} W(t).$$ Let $H:\DT\to\R$ be the payoff functional of a path-dependent derivative that the agent wants to sell. The price of such a derivative at time $t$ is computed as $$Y(t)=\EE^\PP\left[H(S_T)\mid\F_t\right].$$
The following proposition is a direct corollary of \prop{mg-repr}. \begin{proposition}[Universal hedging formula] If $\EE^\PP\left[\abs{H(S_T)}^2\right]<\infty$ and if the price process admits a smooth functional representation in terms of $S$, that is $Y\in\Cloc(S)$, then: \begin{align} \PP\text{-a.s.}\quad H(S_T)&=\EE^\PP\left[H(S_T)\mid\F_t\right]+\int_t^T\nabla_S Y(u)\cdot\mathrm{d} S(u)\label{eq:implicithedge}\\ &=\EE^\PP\left[H(S_T)\mid\F_t\right]+\int_t^T\vd F(u,S_u)\cdot\mathrm{d} S(u),\label{eq:univ-hedge} \end{align} where $Y(t)=F(t,S_t)$ $\mathrm{d} t\times\mathrm{d}\PP$-almost everywhere and $\vd F(\cdot,S_\cdot)$ is the unique (up to indistinguishable processes) asset position process of the hedging strategy for $H$. \end{proposition} We refer to equation \eq{univ-hedge} as the \lq universal hedging formula\rq, because it gives an explicit representation of the hedging strategy for a path-dependent option $H$; the only dependence on the model lies in the computation of the price $Y$. \begin{remark} If the price process does not admit a smooth functional representation in terms of $S$, but the payoff functional still satisfies $\EE^\PP\left[\abs{H(S_T)}^2\right]<\infty$, then equation \eq{implicithedge} still holds. \end{remark} In this case, the hedging strategy is not given explicitly, being the vertical derivative of a square-integrable martingale, but it can be approximated by the vertical derivatives of smooth \naf s. Namely: there exists a sequence of smooth functionals $$\{F^n\in\Cb(\L_T),\,F^n(\cdot,S_\cdot)\in\M^2(S),\,\norm{F^n(\cdot,S_\cdot)}_2<\infty\}_{n\geq1},$$ where $$\norm{Y}_2:=\EE^\PP\left[\abs{Y(T)}^2\right]^{\frac12}<\infty,\quad Y\in\M^2(S),$$ such that $$\norm{F^n(\cdot,S_\cdot)-Y}_2\limn0\quad \text{and}\quad \norm{\nabla_S Y-\nabla_S F^n(\cdot,S_\cdot)}_{\LL(S)}\limn0.$$
For example, \citet{contlu} compute an explicit approximation of the integrand in the representation \eq{implicithedge}, which cannot itself be computed through pathwise perturbations. They allow the underlying process $X$ to be the strong solution of a path-dependent stochastic differential equation with non-anticipative, Lipschitz-continuous and non-degenerate coefficients, and consider the Euler-Maruyama scheme of this SDE, proving the strong convergence of the Euler-Maruyama approximation to the original process. Assuming that the payoff functional $H:(\DT,\norm{\cdot}_\infty)\to\R$ is continuous with polynomial growth, they are able to define a sequence $\{F_n\}_{n\geq1}$ of smooth functionals $F_n\in\CC^{1,\infty}(\L_T)$ that approximate the pricing functional, and thus provide a smooth functional approximation sequence $\{\vd F_n(\cdot,S_\cdot)\}_{n\geq1}$ for the hedging process $\nabla_SY$.
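In the same spirit, though far cruder than the scheme of \citet{contlu}, the following Monte Carlo sketch illustrates the idea of obtaining the hedge ratio as a finite-difference vertical perturbation of the price functional. The lognormal martingale dynamics, the Asian-type payoff and all parameter values are our own illustrative assumptions, not taken from \cite{contlu}.
\begin{verbatim}
import numpy as np

def mc_price(t, spot, running_int, T=1.0, sigma=0.2,
             n_paths=100_000, n_steps=50, seed=2):
    # Monte Carlo price at time t of H = (1/T) int_0^T S(u) du under
    # the assumed dynamics dS = sigma S dW (zero rates); running_int
    # stores int_0^t S(u) du, the only path-dependent state variable.
    rng = np.random.default_rng(seed)      # common random numbers
    dt = (T - t) / n_steps
    S = np.full(n_paths, float(spot))
    integ = np.full(n_paths, float(running_int))
    for _ in range(n_steps):
        integ += S * dt                    # left-point rule on [t, T]
        S *= np.exp(sigma * np.sqrt(dt) * rng.normal(size=n_paths)
                    - 0.5 * sigma ** 2 * dt)
    return float(np.mean(integ / T))

# vertical perturbation: bump only the *current* value of the path
t, spot, A_t, h = 0.5, 1.0, 0.48, 1e-2
nabla = (mc_price(t, spot + h, A_t) - mc_price(t, spot - h, A_t)) / (2 * h)
print(nabla)        # exact hedge ratio for this payoff: (T - t)/T = 0.5
\end{verbatim}
Note that the bump leaves the accumulated integral $\int_0^t S(u)\mathrm{d} u$ untouched: a vertical perturbation modifies the path only at the current time. Reusing the same random draws in both calls keeps the finite-difference noise small.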
Another application is derived from \prop{harmonic} for the pricing of path-dependent derivatives. \begin{proposition}[Universal pricing equation]\label{prop:universalprice} If there exists a smooth functional representation of the price process $Y$ for $H$, i.e.
$$ \exists F\in\Cb(\W_T):\quad F(t,S_t)=\EE^{\PP}[H(S_T)|\F_t]\quad \mathrm{d} t\times \mathrm{d}\PP\text{-a.s.},$$ such that $\hd F\in\CC^{0,0}_l(\W_T)$, then the following path-dependent partial differential equation holds on the topological support of $S$ in $\lf C([0,T],\R^d),\norm{\cdot}_\infty\rg$ for all $t\in[0,T]$: \begin{equation}\label{eq:pricing} \hd F(t,\w_t)+\frac12\tr\lf\vd^2F(t,\w_t)\s(t,\w_t)\,{}^t\!\s(t,\w_t)\rg=0. \end{equation} \end{proposition}
\begin{remark}
If there exists a smooth functional representation of the price process $Y$ for $H$, but the horizontal derivative is not left-continuous, then the pricing equation \eq{pricing} cannot hold on the whole topological support of $S$ in $\lf C([0,T],\R^d),\norm{\cdot}_\infty\rg$, but it still holds for $\PP$-almost every $\w\in C([0,T],\R^d)$. \end{remark}
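As a simple sanity check of the pricing equation \eq{pricing} (our own elementary example, in $d=1$), consider the \naf
$$F(t,\w_t)=\w(t)^2-\int_0^t\s(u,\w_u)^2\,\mathrm{d} u,$$
for which $\hd F(t,\w_t)=-\s(t,\w_t)^2$, $\vd F(t,\w_t)=2\w(t)$ and $\vd^2F(t,\w_t)=2$, so that
$$\hd F(t,\w_t)+\frac12\vd^2F(t,\w_t)\,\s(t,\w_t)^2=-\s(t,\w_t)^2+\s(t,\w_t)^2=0.$$
Consistently, $F(t,S_t)=S(t)^2-[S](t)$ is a $\PP$-martingale, i.e. the price process of the payoff $H(\w)=\w(T)^2-\int_0^T\s(u,\w_u)^2\,\mathrm{d} u$.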
\section{Path-dependent PDEs and BSDEs} \label{sec:PPDE}
In the Markovian setting, there is a well-known relation between backward stochastic differential equations (BSDEs) and semi-linear parabolic PDEs, via the so-called non-linear Feynman-Kac formula introduced by \citet{pardoux-peng92} (see also \citet{pardoux-peng90} for an introduction to BSDEs and \citet{elkaroui-peng-quenez} for a comprehensive guide to BSDEs and their applications in finance). This relation can be extended to a non-Markovian setting using the functional \ito\ calculus.
Consider the following forward-backward stochastic differential equation (FBSDE) with path-dependent coefficients: \begin{eqnarray}
X(t)&=&x+\int_0^tb(u,X_{u})\mathrm{d} u+\int_0^t\s(u,X_{u})\cdot\mathrm{d} W(u)\label{eq:FBSDE1}\\
Y(t)&=&H(X_T)+\int_t^Tf(u,X_{u},Y(u),Z(u))\mathrm{d} u-\int_t^TZ(u)\cdot\mathrm{d} X(u)\label{eq:FBSDE2}, \end{eqnarray} where $W$ is a $d$-dimensional Brownian motion on $(D([0,T],\R^d),\PP)$, $\FF=\Ft$ is the $\PP$-augmented natural filtration of the coordinate process $X$, the terminal value $H(X_T)$ is a square-integrable $\F_T$-measurable random variable, i.e. $H(X_T)\in L^2(\O,\F_T,\PP)$, and the coefficients $$b:\W_T\to\R^d,\ \s:\W_T\to\R^{d\times d},\ f:\W_T\times\R\times\R^d\to\R$$
are assumed to satisfy the standard assumptions guaranteeing that the process $M$, $M(t)=\int_0^t\s(u,X_{u})\cdot\mathrm{d} W(u)$, is a square-integrable martingale and that the forward equation \eq{FBSDE1} has a unique strong solution $X$ satisfying $\EE^\PP\left[\sup_{t\in[0,T]}|X(t)|^2\right]<\infty$. Moreover, assuming also $\mathrm{det}\lf\s(t,X_{t})\rg\neq0$ $\mathrm{d} t\times\mathrm{d}\PP$-almost everywhere, they guarantee that the FBSDE \eq{FBSDE1}-\eq{FBSDE2} has a unique solution $(Y,Z)\in\S^{1,2}(M)\times\L^2(M)$ such that $\EE^\PP\left[\sup_{t\in[0,T]}|Y(t)|^2\right]<\infty$ and $Z=\nabla_MY$.
The following is the extension of the non-linear Feynman-Kac formula of \cite{pardoux-peng92} to the non-Markovian setting.
\begin{theorem}[Theorem 5.14 in \cite{cont-notes}]\label{thm:FK-cont} Let $F\in\Cloc(\W_T)$ be a solution of the path-dependent PDE $$\begin{cases}
\hd F(t,\w)+f(t,\w_t,F(t,\w),\vd F(t,\w))+\frac12\tr(\s(t,\w)\,^t\!\s(t,\w)\vd^2F(t,\w))=0\\
F(T,\w)=H(\w_T) \end{cases}$$ for $(t,\w)\in[0,T]\times\supp(X)$. Then, the pair $(Y,Z)=(F(\cdot,X_\cdot),\vd F(\cdot,X_\cdot))$ solves the FBSDE \eq{FBSDE1}-\eq{FBSDE2}. \end{theorem} Together with the standard comparison theorem for BSDEs, \thm{FK-cont} provides a comparison principle for functional Kolmogorov equations and uniqueness of the solution.
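As a quick consistency check: in the linear case $f\equiv0$ with $b\equiv0$ (so that $X=M$ is a martingale), the backward equation \eq{FBSDE2} reduces to $$Y(t)=H(X_T)-\int_t^TZ(u)\cdot\mathrm{d} X(u),$$ hence $Y(t)=\EE^\PP[H(X_T)|\F_t]$, and \thm{FK-cont} collapses to the path-dependent Feynman-Kac formula of \Sec{kolmogorov} together with the martingale representation \eq{mgrepr}, with $Z=\vd F(\cdot,X_\cdot)$.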
To prove existence of a solution to \eq{FPDE-cont}, additional regularity of the coefficients is needed. A result in this direction is provided by \citet{peng}, using BSDEs where the forward process is a Brownian motion. \citet{peng} considers the following backward stochastic differential equation: \begin{equation}\label{eq:peng} Y^{(t,\g)}(s)=H(W^{(t,\g)}_T)+\int_s^Tf(W^{(t,\g)}_u,Y^{(t,\g)}(u),Z^{(t,\g)}(u))\mathrm{d} u-\int_s^TZ^{(t,\g)}(u)\mathrm{d} W(u), \end{equation} where $W$ is the coordinate process on the Wiener space $(C([0,T],\R^d),\PP)$ and, for all $(t,\g)\in\L_T$, $W^{(t,\g)}=\g\ind_{[0,t)}+(\g(t)+W-W(t))\ind_{[t,T]}$. Note that the notation has been rearranged to be consistent with the presentation in this thesis.
The BSDE \eq{peng} has a unique solution $(Y^{(t,\g)},Z^{(t,\g)})\in S^2([t,T])\times M^2([t,T])$, where $S^2([t,T])$ denotes the space of $\R^m$-valued processes $X$ such that $\EE^\PP[\sup_{u\in[t,T]}|X(u)|^2]<\infty$ and $M^2([t,T])$ the space of $\R^{m\times d}$-valued processes $X$ such that $X\in L^2([t,T]\times\O,\mathrm{d} t\times\mathrm{d}\PP)$, both adapted to the completion of the filtration generated by $\{W(u)-W(t),\,u\in[t,T]\}$, under the following assumptions on the coefficients: \begin{enumerate} \item $H:\L_T\to\R^m$ satisfies
\begin{enumerate}
\item $\psi^{(t,\g)}:\R^d\to\R^m,\,e\mapsto H(\g+e\ind_{[t,T]})$ is twice differentiable in 0 for all $(t,\g)\in[0,T]\times D([0,T],\R^d)$,
\item $\abs{H(\g_T)-H(\g'_T)}\leq C(1+\norm{\g_T}_\infty^k+\norm{\g'_T}_\infty^k)\norm{\g_T-\g'_T}_\infty$ for all $\g,\g'\in D([0,T],\R^d)$,
\item $\abs{\partial^j_e\psi^{(t,\g)}(0)-\partial^j_e\psi^{(t',\g')}(0)}\leq C(1+\norm{\g_T}_\infty^k+\norm{\g'_T}_\infty^k)(\abs{t-t'}+\norm{\g_T-\g'_T}_\infty)$ for all $\g,\g'\in D([0,T],\R^d)$, $t,t'\in[0,T]$, $j=1,2$;
\end{enumerate} \item $f:\L_T\times\R^m\times\R^{m\times d}\to\R^m$ is continuous; for any $(t,\g)\in\L_T$ and $s\in[0,t]$, $(x,y,z)\mapsto f(t,\g_t+x\ind_{[s,T]},y,z)$ is of class $C^3(\R^d\times\R^m\times\R^{m\times d},\R^m)$ with first-order partial derivatives and second-order partial derivatives with respect to $(y,z)$ uniformly bounded, and all partial derivatives up to order three growing at most as a polynomial at infinity; for any $(t,y,z)$, $\g\mapsto f(t,\g_t,y,z)$ satisfies assumptions 1(a),1(b),1(c) replacing $H$ with $f(t,\cdot_t,y,z)$, $\g\mapsto \partial_yf(t,\g_t,y,z),\partial_zf(t,\g_t,y,z)$ satisfy assumptions 1(a),1(b) and 1(c) with only $j=1$, and $$\g\mapsto \partial_{yy}f(t,\g_t,y,z),\partial_{zz}f(t,\g_t,y,z),\partial_{yz}f(t,\g_t,y,z)$$ satisfy the assumptions 1(a),1(b). \end{enumerate} The associated functional Kolmogorov equation is the following: for all $\g\in D([0,T],\R^d)$ and $t\in[0,T]$, \begin{equation}\label{eq:FPDE-peng} \begin{cases}\hd F(t,\g_t)+\frac12\tr(\vd^2F(t,\g_t))+f(t,\g_t,F(t,\g_t),\vd F(t,\g_t))=0,\\F(T,\g_T)=H(\g_T). \end{cases} \end{equation} First, by the functional \ito\ formula, the analogue of \thm{FK-cont} is obtained; then the converse result is proved: the \naf\ $F$ defined by $F(t,\g)=Y^{(t,\g_t)}(t)$ is the unique $\CC^{1,2}(\L_T)$-solution of the functional Kolmogorov equation \eq{FPDE-peng}. This significant result rests on the theory of BSDEs.
Another approach to the connection between PDEs and SDEs in the path-dependent case is provided by \citet{flandoli-zanco}, who reformulate the problem in an infinite-dimensional setting on Banach spaces, where solutions of the SDE are understood in the mild sense and the Kolmogorov equations are defined appropriately. However, in the infinite-dimensional framework, the regularity requirements are very strong, involving at least Fr\'echet differentiability.
\subsection{Weak and viscosity solutions of path-dependent PDEs} \label{sec:viscosity}
The results seen above in \Sec{kolmogorov} require a regularity that is often difficult to prove and classical solutions of the above path-dependent PDEs may fail to exist. To find a way around this issue, more general notions of solution have been proposed, analogously to the Markovian case where weak solutions of PDEs are considered or viscosity solutions are used to link solutions of BSDEs to the associated PDE.
\citet{cont-notes} proposed the following notion of \emph{weak solution}, using the weak functional \ito\ calculus presented in \Sec{weak} and generalizing \prop{harmonic}.
Consider the stochastic differential equation \eq{FBSDE1} with path-dependent coefficients such that $X$ is the unique strong solution and $M$ is a square-integrable martingale.
Denote by $\WW^{1,2}(\PP)$ the Sobolev space of $\mathrm{d} t\times\mathrm{d}\PP$-equivalence classes of \naf s $F:\L_T\to\R$ such that the process $S=F(\cdot,X_\cdot)$ belongs to $\S^{1,2}(M)$, equipped with the norm $\norm{\cdot}_{\WW^{1,2}}$ defined by \begin{align*} \norm{F}^2_{\WW^{1,2}}:={}&\norm{F(\cdot,X_\cdot)}_{1,2}^2\\
={}&\EE^\PP\left[|F(0,X_0)|^2+\int_0^T\tr(\nabla_M F(t,X_t)\,{}^t\!\nabla_M F(t,X_t)\,\mathrm{d}[M](t))\right.\\
&\left.\quad{}+\int_0^T|v(t)|^2\mathrm{d} t\right], \end{align*} where $F(t,X_t)=V(t)+\int_0^t\nabla_MS\mathrm{d} M$ and $V(t)=S(0)+\int_0^tv(u)\mathrm{d} u$, $V\in\A^2(\FF)$. Equivalently, $\WW^{1,2}(\PP)$ can be defined as the completion of $(\Cb(\L_T),\norm{\cdot}_{\WW^{1,2}})$.
Note that, in general, it is not possible to define for $F\in\WW^{1,2}(\PP)$ the $\FF$-adapted process $\hd F(\cdot,X_\cdot)$, because this would require $F(\cdot,X_\cdot)\in\S^{2,2}(M)$. On the other hand, the finite-variation part of $S$ belongs to the Sobolev space $H^1([0,T])$, so the process $U$ defined by $$U(t):=F(T,X_T)- F(t,X_t)-\int_t^T\nabla_MF(u,X_u)\mathrm{d} M(u),\quad t\in[0,T],$$ has paths in $H^1([0,T])$, almost surely. By integration by parts, for all $\Phi\in\A^2(\FF)$, $\Phi(t)=\int_0^t\phi(u)\mathrm{d} u$ for $t\in[0,T]$, $$\bea{l}\int_0^T\Phi(t)\frac\mathrm{d}{\mathrm{d} t}\lf F(t,X_t)-\int_0^t\nabla_MF(u,X_u)\mathrm{d} M(u)\rg \mathrm{d} t\\ =\int_0^T\Phi(t)\lf-\frac\mathrm{d}{\mathrm{d} t}U(t)\rg\mathrm{d} t\\ =\int_0^T\phi(t)\lf F(T,X_T)-F(t,X_t)-\int_t^T\nabla_MF(u,X_u)\mathrm{d} M(u)\rg \mathrm{d} t. \end{array}$$
Thus, the following notion of weak solution is well defined. \begin{definition}
A \naf\ $F\in\WW^{1,2}(\PP)$ is called a \emph{weak solution} of the path-dependent PDE \eq{FPDE-cont} on $\supp(X)$ with terminal condition $H(X_T)\in L^2(\O,\PP)$ if, for all $\phi\in L^2([0,T]\times\O,\mathrm{d} t\times\mathrm{d}\PP)$, it satisfies \begin{equation}\label{eq:weak} \begin{cases} \EE^\PP\left[\int_0^T\phi(t)\lf H(X_T)-F(t,X_t)-\int_t^T\nabla_MF(u,X_u)\mathrm{d} M(u)\rg \mathrm{d} t\right]=0,\\ F(T,X_T)=H(X_T). \end{cases} \end{equation} \end{definition}
Using the tools of the functional \ito\ calculus presented in this chapter, different notions of viscosity solutions have recently been proposed, depending on the path-dependent partial differential equation considered. \citet{ektz} proposed a notion of viscosity solution for semi-linear parabolic path-dependent PDEs which allows one to extend the non-linear Feynman-Kac formula to the non-Markovian case. \citet{ektz1} generalize the definition of viscosity solutions introduced in \cite{ektz} to deal with fully non-linear parabolic path-dependent PDEs; in \cite{ektz2} they then prove a comparison result for such viscosity solutions which implies well-posedness. \citet{cosso} extended the results of \cite{ektz2} to the case of a possibly degenerate diffusion coefficient for the forward process driving the BSDE.
We remark that, although these approaches are useful for studying solutions of path-dependent PDEs both theoretically and in applications, the problem studied in this thesis cannot be addressed by means of viscosity or weak solutions. This is because the change of variable formula for \naf s and the pathwise definition of the F\"ollmer integral are the key tools that allow us to achieve the robustness results, and they require smoothness ($\CC^{1,2}$ regularity) of the portfolio value functionals.
\chapter{A pathwise approach to continuous-time trading} \label{chap:path-trading} \chaptermark{Pathwise continuous-time trading}
The \ito\ theory of stochastic integration defines the integral of a general non-anticipative integrand as either an $L^2$ limit or a limit in probability of non-anticipative Riemann sums. The resulting integral is therefore defined almost surely, but does not have a well-defined value along a given sample path. If one interprets such an integral as the gain of a trading strategy, this poses a problem of interpretation: the gain need not be defined scenario by scenario, which does not make sense financially. It is therefore important to have a construction which gives a meaning to such integrals in a pathwise sense.
In this Chapter, after reviewing in \Sec{lit-path} various approaches proposed in the literature for the pathwise construction of integrals with respect to stochastic processes, we present an analytical setting where the pathwise computation of the gain from trading is possible for a class of continuous-time trading strategies which includes in particular \lq delta-hedging\rq\ strategies. This construction also allows for a pathwise definition of the self-financing property.
\section{Pathwise integration and model-free arbitrage} \label{sec:lit-path}
\subsection{Pathwise construction of stochastic integrals} \label{sec:pathint}
A first attempt at a pathwise construction of stochastic integrals, dealing with Brownian integrals, dates back to the sixties and is due to \citet{wong-zakai}. They showed that, for a restricted class of integrands, the sequence of Riemann-Stieltjes integrals obtained by replacing the Brownian motion with a sequence of approximating smooth integrators converges in mean square (hence pathwise along a properly chosen subsequence) to a Stratonovich integral. This approach is based on approximating the integrator process.
In 1981, \citet{bichteler} obtained almost-sure convergent subsequences by using stopping times. Namely, given a \caglad\ process $\phi$ and a sequence of non-negative real numbers $(c_n)_{n\geq0}$ such that $\sum\limits_{n\geq0}c_n<\infty$, by defining for each $n\geq0$ a sequence of stopping times $T^n_0=0$, $T_{k+1}^n=\inf\{t>T^n_k:\,|\phi(t)-\phi(T^n_k)|>c_n\}$, $k\geq 0$, for a certain class of integrators $M$ (more general than square-integrable martingales) the following holds: for almost all states $\w\in\O$, $\lf\int\phi\mathrm{d} M\rg(\w)$ is the uniform limit on every bounded interval of the pathwise integrals $\lf\int\phi^n\mathrm{d} M\rg(\w)$ of the approximating elementary processes $\phi^n(t)=\sum\limits_{k\geq0}\phi(T^n_k)\ind_{(T^n_k,T^n_{k+1}]}(t)$, $t\geq0$. Though Bichteler's method is constructive, it involves stopping times. Moreover, note that the $\PP$-null set outside of which convergence does not hold depends on $\phi$.
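A discrete-grid rendering of this construction (our illustrative Python sketch, not from \cite{bichteler}) makes the mechanism transparent: the elementary process simply holds the last sampled value of $\phi$ until the path has moved by more than $c_n$, so that the sampled approximation stays within $c_n$ of $\phi$ by construction.
\begin{verbatim}
import numpy as np

def elementary_approximation(phi, c):
    # Hold the value sampled at the last stopping time T_k and resample
    # as soon as |phi(t) - phi(T_k)| > c (grid version of Bichteler's T^n_k).
    out = np.empty_like(phi)
    level = phi[0]
    for j, v in enumerate(phi):
        if abs(v - level) > c:
            level = v                 # a new stopping time is reached
        out[j] = level
    return out

rng = np.random.default_rng(3)
phi = np.cumsum(rng.normal(0.0, 0.01, 10_000))   # sampled integrand path
phi_n = elementary_approximation(phi, c=0.05)
print(np.max(np.abs(phi - phi_n)))               # <= 0.05 by construction
\end{verbatim}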
\subsubsection{Pathwise stochastic integration by means of ``skeleton approximations''}
In 1989, Willinger and Taqqu~\cite{willtaq} proposed a constructive method to compute stochastic integrals path-by-path by making both time and the probability space discrete. The discrete and finite case contains the main idea of their approach and shows the connection between the completeness property, i.e. the martingale representation property, and stochastic integration. Consider a probability space $(\O,\F,\PP)$ endowed with a filtration $\FF=(\F_t)_{t=0,1,\ldots,T}$ generated by minimal partitions of $\O$, $\F_t=\s(\P_t)$ for all $t=0,1,\ldots,T$, and an $\R^{d+1}$-valued $(\FF,\PP)$-martingale $Z=(Z(t))_{t=0,1,\ldots,T}$ with components $Z^0\equiv1$ and $Z^1(0)=\ldots=Z^d(0)=0$. They denote by $\Phi$ the space of all $\R^{d+1}$-valued $\FF$-predictable stochastic processes $\phi=(\phi(t))_{t=0,1,\ldots,T}$, where $\phi(t)$ is $\F_{t-1}$-measurable $\forall t=1,\ldots,T$, and such that \begin{equation}
\label{eq:wt_1} \phi(t)\cdot Z(t)=\phi(t+1)\cdot Z(t)\quad \PP\text{-a.s., }t=0,1,\ldots,T-1, \end{equation} where by definition $\phi(0)\equiv\phi(1)$. Property~\eqref{eq:wt_1} has an interpretation in the context of discrete financial markets as the \textit{self-financing} condition for a strategy $\phi$ trading the assets $Z$, in the sense that at each trading date the investor rebalances his portfolio without either withdrawing or injecting any cash. Moreover, it implies $$(\phi\bullet Z)(t):=\phi(1)\cdot Z(0)+\sum_{s=1}^t\phi(s)\cdot(Z(s)-Z(s-1))=\phi(t)\cdot Z(t)\quad \PP\text{-a.s., }t=0,1,\ldots,T, $$ where $\phi\bullet Z$ is the \textit{discrete stochastic integral} of the predictable process $\phi$ with respect to $Z$. The last equation is still meaningful in financial terms, having on the left-hand side the initial investment plus the accumulated gain and on the right-hand side the current value of the portfolio. The $\R^{d+1}$-valued $(\FF,\PP)$-martingale $Z$ is defined to be \textit{complete} if for every real random variable $Y\in L^1(\O,\F,\PP)$ there exists $\phi\in\Phi$ such that for $\PP$-almost all $\w\in\O$, $Y(\w)=(\phi\bullet Z)(T,\w)$, i.e. \begin{equation}
\label{eq:wt_2}
\{\phi\bullet Z,\;\phi\in\Phi\}=L^1(\O,\F,\PP). \end{equation} The $(Z,\Phi)$-representation problem~\eqref{eq:wt_2} is reduced to a duality structure between the completeness of $Z$ and the uniqueness of an equivalent martingale measure for $Z$, which are furthermore proved (\citet{willtaq87}) to be equivalent to a technical condition on the flow of information and the dynamics of $Z$, that is: $\forall t=1,\ldots,T,\;A\in\P_{t-1},$ \begin{equation}
\label{eq:wt_3}
\dim\lf\mathrm{span}\lf\{Z(t,\w)-Z(t-1,\w),\,\w\in A\}\rg\rg=\sharp\{A'\in\P_t:\,A'\subset A\}-1.
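The discrete representation problem \eqref{eq:wt_2} is easy to visualize on a binary tree, where condition \eqref{eq:wt_3} holds with both sides equal to one. The following Python sketch (ours, purely illustrative) computes by backward induction the predictable integrand replicating a path-dependent claim on an additive random-walk martingale, and verifies the representation scenario by scenario.
\begin{verbatim}
import numpy as np
from itertools import product

T = 4                                  # trading dates; 2^T scenarios

def H(eps):                            # a path-dependent claim (illustrative)
    return float(np.max(np.cumsum(eps)))   # lookback-type payoff on Z

def value(past):                       # E[H | history], P(+1) = P(-1) = 1/2
    if len(past) == T:
        return H(np.array(past))
    return 0.5 * (value(past + (1,)) + value(past + (-1,)))

def phi(past):                         # predictable integrand on the tree:
    up, down = value(past + (1,)), value(past + (-1,))
    return 0.5 * (up - down)           # (V_up - V_down) / (Z_up - Z_down)

for eps in product((1, -1), repeat=T): # check Y = V(0) + (phi . Z)(T)
    gain = sum(phi(eps[:t]) * eps[t] for t in range(T))
    assert abs(value(()) + gain - H(np.array(eps))) < 1e-12
print("replication verified; initial cost =", value(()))
\end{verbatim}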
The discrete-time construction then extends to stochastic integrals of continuous martingales, by using a ``skeleton approach''. The probability space $(\O,\F,\PP)$ is now assumed to be complete and endowed with a filtration $\FF^Z=\FF=\Ft$, where $Z=(Z(t))_{t\in[0,T]}$ denotes an $\R^{d+1}$-valued continuous $\PP$-martingale with components $Z^0\equiv1$ and $Z^1(0)=\ldots=Z^d(0)=0$ $\PP$-a.s., and $\FF$ satisfies the usual conditions and is \textit{continuous} in the sense that, for every measurable set $B\in\F$, the $(\FF,\PP)$-martingale $\lf\PP(B|\F_t)\rg_{t\in[0,T]}$ has a continuous modification. The key notion of the skeleton approach is the following. \begin{definition}\label{def:wt}
A triplet $(I^\z,\FF^\z,\z)$ is a \emph{continuous-time skeleton} of $(\FF,Z)$ if:
\begin{enumerate}
\item[(i)] $I^\z$ is a finite partition $0=t^\z_0<\ldots<t^\z_{N}=:T^\z\leq T$;
\item[(ii)] for all $t\in[0,T]$, $\F^\z_t=\sum\limits_{k=0}^{N-1}\F^\z_{t^\z_k}\ind_{[t^\z_k,t^\z_{k+1})}(t)$, such that for all $k=0,\ldots,N$ there exists a minimal partition of $\O$ which generates the sub-\ss $\F^\z_{t^\z_k}\subset\F_{t^\z_k}$;
\item[(iii)]for all $t\in[0,T]$, $\z(t)=\sum\limits_{k=0}^{N-1}\z_{t^\z_k}\ind_{[t^\z_k,t^\z_{k+1})}(t)$ where $\z_{t^\z_k}$ is $\F^\z_{t^\z_k}$-measurable for all $k=0,\ldots,N$.
\end{enumerate} Given an $\R^{d+1}$-valued stochastic process $\nu=(\nu(t))_{t\in[0,T]}$ and $I^\nu,\FF^\nu$ satisfying (i),(ii), $(I^\nu,\FF^\nu,\nu)$ is called an $\FF^\nu$-predictable (continuous-time) skeleton if, for all $t\in[0,T]$, $\nu(t)=\sum\limits_{k=1}^{N}\nu_{t^\nu_k}\ind_{(t^\nu_{k-1},t^\nu_k]}(t)$ where $\nu_{t^\nu_k}$ is $\F^\nu_{t^\nu_{k-1}}$-measurable for all $k=1,\ldots,N$. \\A sequence of continuous-time skeletons $(I^n,\FF^n,\z^n)_{n\geq0}$ is then called a continuous-time skeleton approximation of $(\FF,Z)$ if the sequence of time partitions $(I^n)_{n\geq0}=\{0=t^n_0<\ldots<t^n_{N^n}=:T^n\leq T\}_{n\geq0}$ has mesh going to 0 as $n\rightarrow\infty$, the \textit{skeleton filtrations} $\FF^n$ converge to $\FF$ in the sense that, for each $t\in[0,T]$, $$\F_t^0\subset\cdots\subset\F_t^{n-1}\subset\F_t^n\subset\s\lf\underset{k\geq0}\cup\F^k_t\rg=\F_t$$ and the \textit{skeleton processes} $\z^n$ converge to $Z$ uniformly in time, as $n\rightarrow\infty$, $\PP$-a.s. \end{definition}
Given $\bar Y\in L^1(\O,\F,\PP)$ and considering the $(\FF,\PP)$-martingale $Y=(Y(t))_{t\in[0,T]}$, $Y(t)=\EE^\PP[\bar Y|\F_t]\,\PP\text{-a.s.}$, the pathwise construction of stochastic integrals with respect to $Z$ runs as follows. \begin{enumerate}
\item Choose a complete continuous-time skeleton approximation $(I^n,\FF^n,\z^n)_{n\geq0}$ of $(\FF,Z)$ such that, defined $Y^n=\lf Y^n_t=\EE^\PP[\bar Y|\F^n_t]\,\PP\text{-a.s.}\rg_{t\in[0,T^n]}$ for all $n\geq0$, the sequence $(I^n,\FF^n,Y^n)_{n\geq0}$ defines a continuous-time skeleton approximation of $(\FF,Y)$.
\item Thanks to the completeness characterization in discrete time, for each $n\geq0$, there exists an $\FF^n$-predictable skeleton $(I^n,\FF^n,\phi^n)$ such that $$\phi^n(t^n_k)\cdot \z^n(t^n_k)=\phi^n(t^n_{k+1})\cdot \z^n(t^n_k)\quad \PP\text{-a.s., }k=0,1,\ldots,N^n,$$ and $$Y^n=(\phi^n\bullet\z^n)(T^n)=\phi^n(T^n)\cdot\z^n(T^n)\quad \PP\text{-a.s.} $$
\item Define the pathwise integral
\begin{equation}
\label{eq:wt_4}
\int_0^t\phi(s,\w)\cdot\mathrm{d} Z(s,\w):=\Limn(\phi^n\bullet\z^n)(t,\w),\quad t\in[0,T]
\end{equation} for $\PP$-almost all $\w\in\O$, namely on the set of scenarios $\w$ where the discrete stochastic integrals converge uniformly. \end{enumerate}
\citet{willtaq} applied their methodology to obtain a convergence theory in the context of models for continuous security market with exogenously given equilibrium prices. Thanks to the preservation of the martingale property and completeness and to the pathwise nature of their approximating scheme, they were able to characterize important features of continuous security models by convergence of ``real life'' economies, where trading occurs at discrete times. In particular, for a continuous security market model represented by a probability space $(\O,\F,\PP)$ and an $(\FF,\PP)$-martingale $Z$ on $[0,T]$, the notions of ``no-arbitrage'' and ``self-financing'' are understood through the existence of converging discrete market approximations $(T^n,\FF^n,\z^n)$ which are all free of arbitrage opportunities (as $\z^n$ is an $(\FF^n,\PP)$-martingale) and complete. Moreover, the characterization~\eqref{eq:wt_3} of completeness in finite market models relates the structure of the skeleton filtrations $\FF^n$ to the number of non-redundant securities needed to complete the approximations $\z^n$.
However, this construction lacks an appropriate convergence result of the sequence $(\phi^n)_{n\geq0}$ to the predictable integrand $\phi$; moreover it deals exclusively with a given martingale in the role of the integrator process, which restricts the spectrum of suitable financial models.
\subsubsection{Continuous-time trading without probability}
In 1994, \citet{bickwill} looked at the then-current issues of financial modeling from a new perspective: they provided an economic interpretation of \follmer's pathwise \ito\ calculus in the field of continuous-time trading models. \follmer's framework turns out to be of interest in finance, as it makes it possible to avoid any probabilistic assumption on the dynamics of traded assets, and consequently any resulting model risk: only observed price trajectories are needed. Bick and Willinger reduced the computation of the initial cost of a replicating trading strategy to an exercise of analysis. For a given stock price trajectory (state of the world), they showed how to compute the outcome of a given trading strategy, that is, the gain from trading.
The set of possible stock price trajectories is taken to be the space of positive \cadlag\ functions, $D([0,T],\R_+)$, and trading strategies are defined only based on the past price information.
They define a \textit{simple trading strategy} to be a couple $(V_0,\phi)$ where $V_0:\R_+\rightarrow\R$ is a measurable function representing the initial investment depending only on the initial stock price and $\phi:(0,T]\times D([0,T],\R_+)\rightarrow\R$ is such that, for any trajectory $S\in D([0,T],\R_+)$, $\phi(\cdot,S)$ is a \caglad\ stepwise function on a time grid $0\equiv\t_0(S)<\t_1(S)<\ldots<\t_m(S)\equiv T$, and satisfies the following \lq adaptation\rq\ property: for all $t\in(0,T],$ given $S_1,S_2\in D([0,T],\R_+)$, if $S_{1\mid_{(0,t]}}=S_{2\mid_{(0,t]}}$, then $\phi(t+,S_1)=\phi(t+,S_2)$, where $\phi(t+,\cdot):=\lim\limits_{u\searrow t}\phi(u,\cdot)$. The value $\phi(t,S)$ represents the amount of shares of the stock held at time $t$. They restrict the attention to self-financing portfolios of the stock and bond (always referring to their discounted prices), so that the number of bonds in the portfolio is described by the map $\psi:(0,T]\rightarrow\R$, $$\psi(t)=V_0(S(0))-\phi(0+,S)S(0)-\sum_{j=1}^{m}S(\t_j\wedge t)(\phi(\t_{j+1}\wedge t,S)-\phi(\t_j\wedge t,S)).$$ The cumulative gain is denoted by $$G(t,S)=\sum_{j=1}^m\phi(\t_j\wedge t,S)(S(\t_j\wedge t)-S(\t_{j-1}\wedge t)).$$ The self-financing assumption supplies us with the following well-known equation linking the gain to the value of the portfolio, \begin{equation}\label{eq:bw_sf} V(t,S):=\psi(t)+\phi(t,S)S(t)=V_0(S(0))+G(t,S), \end{equation} and makes $V$ a \cadlag\ function of time.
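The following minimal Python sketch (ours) makes the bookkeeping in \eq{bw_sf} explicit on a fixed grid: the bond account $\psi$ is reconstructed from the gains process, and the self-financing property is checked at each rebalancing date.
\begin{verbatim}
import numpy as np

def self_financing(S, phi, V0):
    # phi[j] = stock position held on (t_j, t_{j+1}], prices S[0..m];
    # wealth is V = V0 + G (eq. bw_sf) and psi is the bond account.
    dS = np.diff(S)
    V = V0 + np.concatenate(([0.0], np.cumsum(phi * dS)))
    psi = V[1:] - phi * S[1:]
    return psi, V

rng = np.random.default_rng(4)
S = np.concatenate(([1.0], 1.0 + np.cumsum(rng.normal(0.0, 0.01, 250))))
phi = rng.uniform(-1.0, 1.0, len(S) - 1)    # arbitrary positions
psi, V = self_financing(S, phi, V0=100.0)

# rebalancing at each date moves no cash in or out of the portfolio:
lhs = psi[:-1] + phi[:-1] * S[1:-1]         # wealth before rebalancing
rhs = psi[1:] + phi[1:] * S[1:-1]           # wealth after rebalancing
assert np.allclose(lhs, rhs)
\end{verbatim}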
Then, they define a \textit{general trading strategy} to be a triple $(V_0,\phi,\Pi)$ where $\phi(\cdot,S)$ is more generally a \caglad\ function, satisfying the same \lq adaptation\rq\ property, and $\Pi=(\pi_n(S))_{n\ge1}$ is a sequence of partitions $\pi_n\equiv\pi_n(S)=\{0=\t^n_0<\ldots<\t^n_{m^n}=T\}$ whose mesh tends to $0$ and such that $\pi_n\cap[0,t]$ depends only on the price trajectory up to time $t$. To any such triple is associated a sequence of simple trading strategies $\{(V_0,\phi^n)\}$, where $\phi^n(t,S)=\sum\limits_{j=0}^{m^n-1}\ind_{(\t^n_j,\t^n_{j+1}]}(t)\phi(\t^n_j+,S)$, and for each $n\geq1$ the corresponding numbers of bonds, cumulative gains and portfolio values are denoted respectively by $\psi^n$, $G^n$ and $V^n$. They define a notion of \textit{convergence for $S$} of a general trading strategy $(V_0,\phi,\Pi)$ involving several conditions, that we simplify in the following: \begin{enumerate} \item $\displaystyle \exists \Limn\psi^n(t,S)=:\psi(t,S)<\infty$ for all $t\in(0,T]$; \item $\psi(\cdot,S)$ is a \caglad\ function; \item $\psi(t+,S)-\psi(t,S)=-S(t)\lf\phi(t+,S)-\phi(t,S)\rg$ for all $t\in(0,T)$. \end{enumerate} The limiting gain and portfolio value of the approximating sequence, if they exist, are denoted by $\displaystyle G(t,S)=\Limn G^n(t,S)$ and $\displaystyle V(t,S)=\Limn V^n(t,S)$. Note that condition 1. can be equivalently reformulated in terms of $G$ or $V$ and, in case it holds, equation \eq{bw_sf} is still satisfied by the limiting quantities. Assuming 1., condition 2. is equivalent to the equation \begin{equation}\label{eq:bw_sf2} V(t,S)-V(t-,S)=\phi(t,S)(S(t)-S(t-)) \quad \forall t\in(0,T], \end{equation} while condition 3. equates to the right-continuity of $V(\cdot,S)$. In this setting, the objects of main interest can be expressed in terms of properly defined \lq one-sided\rq\ integrals, namely \begin{equation}\label{eq:bw_psi} \psi(t,S)=V_0(S(0))-\phi(0+,S)S(0)-\!\!\rint_0^t S(u) \mathrm{d} \phi(u+,S)+S(t)(\phi(t+,S)-\phi(t,S)), \end{equation} where the \emph{right integral} of $S$ with respect to $(\phi(\cdot+,S),\Pi)$ is defined as \begin{equation}\label{eq:bw_right} \rint_0^t S(u) \mathrm{d} \phi(u+,S):=\Limn \sum_{j=1}^{m^n}S(\t^n_j\wedge t) \lf\phi((\t^n_j\wedge t)+,S)-\phi((\t^n_{j-1}\wedge t)+,S) \rg,
\end{equation} and $G(t,S)=\lint_0^t\phi(u+,S)\mathrm{d} S(u)$, where the \emph{left integral} of $\phi(\cdot+,S)$ with respect to $(S,\Pi)$ is defined as \begin{equation}\label{eq:bw_left} \lint_0^t\phi(u+,S)\mathrm{d} S(u):=\Limn \sum_{j=1}^{m^n}\phi(\t^n_{j-1}+,S) \lf S(\t^n_j\wedge t)-S(\t^n_{j-1}\wedge t) \rg. \end{equation} The existence and finiteness of either integral is equivalent to condition 1., hence equation~\eq{bw_sf} turns into the following integration-by-parts formula: $$\lint_0^t\phi(u+,S)\mathrm{d} S(u)=\phi(t+,S)S(t)-\phi(0+,S)S(0)-\rint_0^t S(u) \mathrm{d} \phi(u+,S).$$ It is important to note that the one-sided integrals can exist even if the corresponding Riemann-Stieltjes integrals do not, in which case the right-integral may differ in value from the left-integral with respect to the same functions. When the Riemann-Stieltjes integrals exist, they necessarily coincide respectively with \eq{bw_right} and \eq{bw_left}. Moreover, the latter are associated with a specific sequence of partitions $\Pi$ along which convergence for $S$ holds true. Once the set-up is established, \citeauthor{bickwill} provide a few examples showing how to compute the portfolio value in different situations where convergence holds for $S$ in a certain sub-class of $ D([0,T],\R_+)$, along an arbitrary sequence of partitions.
Finally, they use the pathwise calculus introduced in \citep{follmer} to compute the portfolio value of general trading strategies depending only on time and on the current observed price in a smooth way.
Their two main claims, slightly reformulated, are the following. \begin{proposition}[Proposition 2 in \cite{bickwill}]\label{prop:bw1}
Let $f:[0,T]\times\R_+\rightarrow\R$ be such that $f\in\C^2([0,T)\times\R_+)\cap\C(\{T\}\times\R_+)$ and $\Pi$ be a given sequence of partitions whose mesh tends to $0$. If the price path $S\in D([0,T],\R_+)$ has finite quadratic variation along $\Pi$ and if $f,\partial_{x}f,\partial_{t}f,\partial_{tx}f,\partial_{xx}f,\partial_{tt}f$ have finite left limits at $T$, then the trading strategy $(0,\phi,\Pi)$, where $\phi(t,S)=\partial_xf(t-,S(t-))$, converges for $S$ and its portfolio value at any time $t\in[0,T]$ is given by
\begin{align}
\lint_0^t\phi(u+,S)\mathrm{d} S(u)={}& f(t,S(t))-f(0,S(0))-\int_0^t\partial_{t}f(u,S(u))\mathrm{d} u \label{eq:bw1}\\ &{} -\frac12\int_{[0,t]}\partial_{xx}f(u-,S(u-))\mathrm{d}[S](u) \nonumber \\ \nonumber & {}-\sum_{u\leq t}\Big[f(u,S(u))-f(u-,S(u-)) \\ \nonumber & \left. {}-\partial_{x}f(u-,S(u-))\De S(u)-\frac12\partial_{xx}f(u-,S(u-))\De S^2(u)\right] .
\end{align} \end{proposition} This statement is a straightforward application of \follmer's equation~\eq{follmer_Dito} with the choice $x(t)=(t,S(t))$, which makes the definition of \follmer's integral~\eq{follmer_int} equivalent to the sum of a Riemann integral and a left-integral, i.e. $$\int_0^t\nabla f(x(u-))\cdot \mathrm{d} x(u)=\int_0^t\partial_tf(u,S(u))\mathrm{d} u+\lint_0^t\partial_xf(u,S(u))\mathrm{d} S(u).$$
Moreover, the convergence is ensured by remarking that the pathwise formula~\eq{bw1} implies that the portfolio value $V(t,S)=\lint_0^t\phi(u+,S)\mathrm{d} S(u)$ is a \cadlag\ function and has jumps $$\De V(t)= \partial_xf(t-,S(t-))\De S(t)=\phi(t,S)\De S(t)\text{ for all }t\in(0,T],$$ hence conditions 2. and 3. are respectively satisfied.
The second statement is a direct implication of the previous one and provides a non-probabilistic version of the pricing problem for one-dimensional diffusion models. \begin{proposition}[Proposition 3 in \cite{bickwill}]\label{prop:bw2}
Let $f:[0,T]\times\R_+\rightarrow\R$ be such that $f\in\C^2([0,T)\times\R_+)\cap\C(\{T\}\times\R_+)$ and $f,\partial_{x}f,\partial_{t}f,\partial_{tx}f,\partial_{xx}f,\partial_{tt}f$ have finite left limits at $T$, and let $\Pi$ be a given sequence of partitions whose mesh tends to $0$. Assume that $f$ satisfies the partial differential equation \begin{equation}\label{eq:bw_pde} \partial_tf(t,x)+\frac12\b^2(t,x)\partial_{xx}f(t,x)=0,\quad t\in[0,T],x\in\R_+, \end{equation} where $\b:[0,T]\times\R_+\rightarrow\R$ is a continuous function. If the price path $S\in D([0,T],\R_+)$ has finite quadratic variation along $\Pi$ of the form $[S](t)=\int_0^t\b^2(u,S(u))\mathrm{d} u$ for all $t\in[0,T]$, then the trading strategy $(f(0,S(0)),\phi,\Pi)$, where $\phi(t,S)=\partial_xf(t-,S(t-))$, converges for $S$ and its portfolio value at time $t\in[0,T]$ is $f(t,S(t))$.
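A numerical illustration of \prop{bw2} (our own sketch; the lognormal path, $\s=0.2$ and the quadratic terminal condition are illustrative assumptions): $f(t,x)=x^2e^{\s^2(T-t)}$ solves \eq{bw_pde} with $\b(t,x)=\s x$, and along a simulated path whose quadratic variation has the prescribed form, the delta-hedging portfolio reproduces $f(T,S(T))=S(T)^2$ up to a discretization error.
\begin{verbatim}
import numpy as np

T, sigma, m = 1.0, 0.2, 200_000
f  = lambda t, x: x ** 2 * np.exp(sigma ** 2 * (T - t))   # solves bw_pde
fx = lambda t, x: 2.0 * x * np.exp(sigma ** 2 * (T - t))  # hedge ratio

rng = np.random.default_rng(5)
t = np.linspace(0.0, T, m + 1)
W = np.cumsum(np.r_[0.0, rng.normal(0.0, np.sqrt(T / m), m)])
S = np.exp(sigma * W - 0.5 * sigma ** 2 * t)  # [S](t) ~ int sigma^2 S^2 du

gain = np.sum(fx(t[:-1], S[:-1]) * np.diff(S))  # left Riemann sums (lint)
print(f(0.0, S[0]) + gain - f(T, S[-1]))        # -> 0 as the grid refines
\end{verbatim}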
Following Bick and Willinger's approach, all that has to be specified is the set of all possible scenarios and the trading instructions for each possible scenario. The investor's probabilistic beliefs can then be seen as a way to express the set of all possible scenarios together with their odds; however, there may be no need to consider them. Indeed, taking any financial market model in which the price process satisfies almost surely the assumptions of either of the above propositions, the portfolio value of the corresponding trading strategy, computed pathwise, will provide almost surely the model-based value of that portfolio. In this way, on the one hand, the negligible set outside of which the pathwise results do not hold depends on the specific sequence of time partitions; on the other hand, we get a path-by-path interpretation of the hedging issue, which was missing in the stochastic approach.
\subsubsection{Karandikar's pathwise construction of stochastic integrals}
In 1994, \citet{karandikar} proposed another pathwise approach to stochastic integration for continuous-time stochastic processes. He proved a pathwise integration formula, first for Brownian integrals, then for the general case of semimartingales and a large class of integrands. Throughout, a complete probability space $(\O,\F,\PP)$ is fixed, equipped with a filtration $(\F_t)_{t\geq0}$ satisfying the usual conditions. \begin{proposition}[Pathwise Brownian integral, \cite{karandikar}]
Let $W$ be a $(\F_t)$-Brownian motion and $Z$ be a \cadlag\ $(\F_t)$-adapted process. For all $n\geq1$, let $\{\t^n_i\}_{i\geq0}$ be the random time partition defined by
$$\t^n_0:=0,\quad\t^n_{i+1}:=\inf\{t\geq\t^n_i:\:|Z(t)-Z(\t^n_i)|\geq2^{-n}\},\quad i\geq0,$$ and $(Y^n(t))_{t\geq0}$ be a stochastic process defined by, for all $t\in[0,\infty)$, $$Y^n(t):=\sum_{i=0}^{\infty}Z(\t^n_i\wedge t)(W(\t^n_{i+1}\wedge t)-W(\t^n_i\wedge t)).$$
Then, for all $T\in[0,\infty)$, almost surely, $\displaystyle \sup_{t\in[0,T]}\left|Y^n(t)-\int_0^tZ\mathrm{d} W\right|\limn0$. \end{proposition}
The proof hinges on Doob's inequality for $p=2$, which says that a \cadlag\ martingale $M$ such that, for all $t\in[0,T]$, $\EE[|M(t)|^2]<\infty$, satisfies $$\norm{\sup_{t\in[0,T]}|M(t)|}_{L^2(\PP)} \leq 2\norm{M(T)}_{L^2(\PP)}.$$ Indeed, by taking $M(t)=\int_0^t(Z^n-Z)\mathrm{d} W$, where $\displaystyle Z^n:=\sum_{i=1}^{\infty}Z(\t^n_{i-1})\ind_{[\t^n_{i-1},\t^n_i)}$, Doob's inequality applies and gives
$$\EE\left[\sup_{t\in[0,T]}\left|Y^n(t)-\int_0^tZ\mathrm{d} W\right|^2\right]\leq 4T2^{-2n},$$ by the definitions of $\{\t^n_i\}$ and $Y^n$.
\\Finally, by denoting $\displaystyle U_n:=\sup_{t\in[0,T]}\left|Y^n(t)-\int_0^tZ\mathrm{d} W\right|$, H\"older's inequality implies that $$\EE\left[\sum_{n\geq1}U_n\right]\leq2\sqrt{T}\sum_{n\geq1}2^{-n}<\infty,$$ hence, almost surely, $\sum\limits_{n\geq1}U_n<\infty$, whence the claim.
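For the reader's convenience, the construction above is easily reproduced numerically. The following Python sketch is purely illustrative: a simulated Brownian path stands in for $W$, the integrand is taken to be $Z=W$, so that the \ito\ integral has the closed form $\int_0^TW\mathrm{d} W=(W(T)^2-T)/2$, and the stopping times $\t^n_i$ are evaluated along the simulation grid.
\begin{verbatim}
# Illustrative sketch of Karandikar's pathwise sums Y^n (assumptions:
# simulated Brownian path W on a fine grid; integrand Z = W, for which
# the Ito integral has the closed form (W(T)^2 - T)/2).
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 200_000
dt = T / N
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])
Z = W

def karandikar_sum(Z, X, eps):
    """Y(T) = sum_i Z(tau_i)(X(tau_{i+1}) - X(tau_i)), where
    tau_{i+1} is the first grid time with |Z - Z(tau_i)| >= eps."""
    total, i, n = 0.0, 0, len(Z)
    while i < n - 1:
        j = i + 1
        while j < n - 1 and abs(Z[j] - Z[i]) < eps:
            j += 1
        total += Z[i] * (X[j] - X[i])
        i = j
    return total

for n in range(2, 9):
    print(n, karandikar_sum(Z, W, 2.0 ** (-n)), (W[-1] ** 2 - T) / 2)
\end{verbatim}
As $n$ increases, the printed sums $Y^n(T)$ approach the closed-form value, in line with the almost sure uniform convergence stated above.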
The generalization to semimartingale integrators is the following. \begin{proposition}[Pathwise stochastic integral, \cite{karandikar}] \label{Prop:kar}
Let $X$ be a semimartingale and $Z$ be a \cadlag\ $(\F_t)$-adapted process. For all $n\geq1$, let $\{\t^n_i\}_{i\geq0}$ be the time partition defined as in the previous proposition and $Y^n$ be the process defined by, for all $t\in[0,\infty)$, $$Y^n(t):=Z(0)X(0)+\sum_{i=1}^{\infty}Z(\t^n_{i-1}\wedge t)(X(\t^n_{i}\wedge t)-X(\t^n_{i-1}\wedge t)).$$ Then, for all $T\in[0,\infty)$, almost surely,
$$\displaystyle \sup_{t\in[0,T]}\left|Y^n(t)-\int_0^tZ(u-)\mathrm{d} X(u)\right|\limn0.$$ \end{proposition} The proof is carried out analogously to the Brownian case, using some basic properties of semimartingales and predictable processes. Precisely, $X$ is decomposed as $X=M+A$, where $M$ is a locally square-integrable martingale and $A$ has finite variation on bounded intervals, and $\{\s_k\}_{k>0}$ is a sequence of stopping times increasing to $\infty$ such that $C_k=\EE\left[\pqv{M}(\s_k)\right]<\infty$. By rewriting $Y^n(t)=\int_0^tZ^n\mathrm{d} X$, where $$Z^n:=Z(0)\ind_{0}+\sum_{i=1}^\infty Z(\t^n_i)\ind_{(\t^n_i,\t^n_{i+1}]},$$ Doob's inequality gives
$$\EE\left[\sup_{t\in[0,\s_k]}\left|\int_0^t(Z^n(u)-Z(u-))\mathrm{d} M\right|^2\right]\leq 4C_k2^{-2n},$$ by the definitions of $\{\t^n_i\}$. Then, proceeding as before and using $\s_k\nearrow\infty$, for all $T\in[0,\infty)$, almost surely
$$\displaystyle \sup_{t\in[0,T]}\left|\int_0^t(Z^n(u)-Z(u-))\mathrm{d} M(u)\right|\limn0.$$ As regards the Stieltjes integrals with respect to $A$, the uniform convergence of $Z^n$ to the left-continuous version of $Z$ implies directly that, almost surely,
$$\displaystyle \sup_{t\in[0,T]}\left|\int_0^t(Z^n(u)-Z(u-))\mathrm{d} A(u)\right|\limn0.$$
The main tool in Karandikar's pathwise characterization of stochastic integrals is Doob's martingale inequality. A recent work by \citet{traj-doob} establishes a deterministic version of Doob's martingale inequality, which provides an alternative proof of the latter, both in discrete and continuous time.
Using the trajectorial counterparts, they also improve Doob's classical estimates for non-negative \cadlag\ submartingales by using the initial value of the process, obtaining sharp inequalities.
These continuous-time inequalities are proven by means of pathwise integrals constructed ad hoc. First, let us recall the following notion of pathwise integral (see \cite[Chapter 2.5]{norvaisa}): \begin{definition}
Given two \cadlag\ functions $f,g:[0,T]\rightarrow[0,\infty)$, the \emph{Left Cauchy-Stieltjes integral} of $g$ with respect to $f$ is defined as the limit, denoted $(LCS)\!\!\int_0^Tg\mathrm{d} f$, of the directed function $(S_{LC}(g,f;\cdot),\mathfrak R)$, where the \emph{Left Cauchy sum} is defined by
\begin{equation}
\label{eq:left-C-S}
S_{LC}(g,f;\k):=\sum_{t_i\in\k}g(t_i)(f(t_{i+1})-f(t_i)),\quad\k\in P[0,T].
\end{equation}
\end{definition} \citet{traj-doob} are interested in the particular case where the integrand is of the form $g=h(\bar f)$ and $h:[0,\infty)\rightarrow[0,\infty)$ is a continuous monotone function. In this case, the limit of the sums in \eq{left-C-S} in the sense of refinements of partitions exists if and only if its predictable version $\displaystyle (LCS)\!\!\int_0^Tg(t-)\mathrm{d} f(t):=\Limn\sum_{t^n_i\in\pi^n}g(t^n_i-)(f(t^n_{i+1})-f(t^n_i))$ exists for every dense sequence of partitions $(\pi^n)_{n\geq0}$, in which case the two limits coincide. By monotonicity of $g$ and by rearranging the finite sums, it follows that $\int_0^Tg(t)\mathrm{d} f(t)$ is well defined if and only if $\int_0^Tf(t)\mathrm{d} g(t)$ is; if so, they satisfy the following integration-by-parts formula: \begin{align}
\nonumber
(LCS)\!\!\int_0^Tg(t)\mathrm{d} f(t)={}&g(T)f(T)-g(0)f(0)-(LCS)\!\!\int_0^Tf(t)\mathrm{d} g(t)\\ &{}-\sum_{0\leq t\leq T}\De g(t)\De f(t).\label{eq:ibp} \end{align} By the assumptions on $h$, the two integrals exist and the equation \eq{ibp} holds. Moreover, given a martingale $S$ on $(\O,\F,(\F_t)_{t\geq0},\PP)$ and taking $f$ to be the path of $S$, the Left Cauchy-Stieltjes integral coincides almost surely with the \ito\ integral, i.e. $$(h(\bar S)\bullet S)(T,\w)=(LCS)\!\!\int_0^Th(\bar S(t-,\w))\mathrm{d} S(t,\w),\text{ for $\PP$-almost all }\w\in\O.$$ Indeed, \citet{karandikar} showed the almost sure uniform convergence of the sums in \eq{left-C-S} to the \ito\ integral along a specific sequence of random partitions; therefore, by the existence of the pathwise integral and uniqueness of the limit, the two coincide.
The trajectorial Doob inequality obtained in continuous time and using the pathwise integral defined above is the following. \begin{proposition}
Let $f:[0,T]\rightarrow[0,\infty)$ be a \cadlag\ function, $1<p<\infty$ and $h(x):=-\frac{p^2}{p-1}x^{p-1}$. Then $$\bar{f}^p(T)\leq(LCS)\!\!\int_0^Th(\bar{f}(t))\mathrm{d} f(t)-\frac p{p-1}f(0)^p+\lf\frac p{p-1}\rg^pf(T)^p.$$ \end{proposition}
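Since the inequality is trajectorial, it can be checked on any single discretized path. The following Python sketch is purely illustrative: $f$ is a discretized geometric Brownian path, the $(LCS)$ integral is replaced by a left Cauchy sum on the simulation grid, and the inequality is checked for $p=2$.
\begin{verbatim}
# Illustrative check of the trajectorial Doob inequality (assumptions:
# f a discretized non-negative path, here a geometric Brownian path;
# the LCS integral approximated by a left Cauchy sum on the fine grid).
import numpy as np

rng = np.random.default_rng(1)
N, p = 100_000, 2.0
t = np.linspace(0.0, 1.0, N + 1)
dW = rng.normal(0.0, np.sqrt(1.0 / N), N)
f = np.exp(np.concatenate([[0.0], np.cumsum(dW)]) - 0.5 * t)
fbar = np.maximum.accumulate(f)              # running maximum of f

h = lambda x: -(p ** 2 / (p - 1)) * x ** (p - 1)
lcs = np.sum(h(fbar[:-1]) * np.diff(f))      # left Cauchy sum S_LC(h(fbar), f)
lhs = fbar[-1] ** p
rhs = lcs - p / (p - 1) * f[0] ** p + (p / (p - 1)) ** p * f[-1] ** p
print(lhs <= rhs, lhs, rhs)
\end{verbatim}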
\subsubsection{Pathwise integration under a family of measures}
In 2012, motivated by problems involving stochastic integrals under families of measures, \citet{nutz-int} proposed a different pathwise ``construction'' of the \ito\ integral of an arbitrary predictable process under a general set $\P$ of probability measures which is not dominated by a finite measure and under which the integrator process is a semimartingale. However, his result concerns only existence and does not provide a constructive procedure to compute such an integral.
Let us briefly recall his technique. A measurable space $(\O,\F)$ is fixed, endowed with a right-continuous filtration $\FF^*=(\F^*_t)_{t\in[0,1]}$ which is $\P$-universally augmented. $X$ denotes a \cadlag\ process which is an $(\FF^*,\PP)$-semimartingale for all $\PP\in\P$, and $H$ is an $\FF^*$-predictable process. The approach is to average $H$ in time in order to obtain approximating processes of finite variation, which allow one to define (pathwise) a sequence of Lebesgue-Stieltjes integrals converging in medial limit to the \ito\ integrals. To this aim, a domination assumption is needed, but it is imposed at the level of characteristics, thus preserving the non-dominated nature of $\P$ encountered in applications. So, it is assumed that there exists a predictable \cadlag\ increasing process $A$ such that $$B^\PP+\pqv{X^c}^\PP+(x^2\wedge1)\ast\nu^\PP\ll A \quad \PP\text{-a.s., for all }\PP\in\P,$$
where $(B^\PP,\pqv{X^c}^\PP,\nu^\PP)$ is the canonical triplet (i.e. the triplet associated with the truncation function $h(x)=x\ind_{\{|x|<1\}}$) of predictable characteristics of $X$. The main result is the following. \begin{theorem}
Under the assumption above, there exists an $\FF^*$-adapted \cadlag\ process $\lf\int_0^t H\mathrm{d} X\rg_{t\in[0,1]}$ such that $\int_0^\cdot H\mathrm{d} X=(H\bullet X)^\PP$ $\PP$-almost surely, for all $\PP\in\P$, where the construction of $\lf\int H\mathrm{d} X\rg(\w)$ involves only $H(\w)$ and $X(\w)$. \end{theorem}
The proof rests on two lemmas. Without loss of generality and to simplify notation, one sets $X(0)=0$ and $H(t)=A(t)=0$ for all $t<0$; it is also assumed that $X$ has bounded jumps, $|\De X|\leq1$, that $H$ is uniformly bounded, $|H|\leq c$, and that $A(t)-A(s)\geq t-s$ for all $0\leq s\leq t\leq1$. \begin{lemma}\label{lem:nutz1}
For all $n\geq1$, the processes $H^n,Y^n$, defined by $$\bea{l} H^n(0):=0,\quad H^n(t):=\frac1{A(t)-A(t-\frac1n)}\int_{t-\frac1n}^tH(s)\mathrm{d} A(s),\quad 0<t\leq1,\\ Y^n:=H^nX-\int_0^\cdot X(s-)\mathrm{d} H^n(s), \end{array}$$ are well defined (pathwise) in the Lebesgue-Stieltjes sense and $$Y^n=(H^n\bullet X)^\PP\ \PP\text{-a.s.},\quad Y^n\limucp(H\bullet X)^\PP\text{ for all }\PP\in\P.$$ \end{lemma}
\begin{lemma}\label{lem:nutz2}
Let $(Y^n)_{n\geq1}$ be a sequence of $\FF^*$-adapted \cadlag\ processes and assume that for each $\PP\in\P$ there exists a \cadlag\ process $Y^\PP$ such that $Y^n(t)\limnp Y^\PP(t)$ for all $t\in[0,1]$. Then, there exists an $\FF^*$-adapted \cadlag\ process $Y$ such that $Y=Y^\PP$ $\PP$-almost surely for all $\PP\in\P$. \end{lemma}
The first claim in Lemma \ref{lem:nutz1} is a consequence of the assumptions on $H,A$, while the convergence in $ucp(\PP)$ is implied by the $L^2(\PP)$ convergence
$$\EE^\PP\left[\sup_{t\in[0,1]}\left|\int_0^tH^n(s)\mathrm{d} X(s)-\int_0^tH(s)\mathrm{d} X(s)\right|^2\right]\limn0,$$ which in turn is proven thanks to the convergence of $H^n(\w)$ to $H(\w)$ in $L^1([0,1],\mathrm{d} A(\w))$ for all $\w\in\O$.
Instead, Lemma \ref{lem:nutz2} relies on the notion of \emph{Mokobodzki's medial limit}, a kind of \lq projective limit\rq\ of convergence in measure. More precisely, the medial limit $\limmed$ is a map on the set of real sequences such that, if $(Z_n)_{n\geq1}$ is a sequence of random variables on a measurable space, the medial limit defines a universally measurable random variable $Z$, $Z(\w):=\limmed Z_n(\w)$, such that, if for some probability measure $\PP$, $Z_n\limnp Z^\PP$, then $Z^\PP=Z$ $\PP$-almost surely.
However, as anticipated above, Nutz's method does not give a pathwise computation of stochastic integrals, though it supplies us with a process which coincides $\PP$-almost surely with the $\PP$-stochastic integral for each $\PP$ in the set of measures $\P$ and is a limit in $ucp(\PP)$ of approximating Stieltjes integrals.
\subsection{Model-free arbitrage strategies} \label{sec:arbitrage}
Once we have at our disposal a pathwise notion of gain process, a natural next step is to examine the corresponding notion of arbitrage strategy.
The literature investigating arbitrage notions in financial markets admitting uncertainty is recent and there are different approaches to the subject. The mainstream approach is that of model-uncertainty, where arbitrage notions are reformulated for families of probability measures in a way analogous to the classical case of a stochastic model. However, most of the contributions in this direction deal with discrete-time frameworks. In continuous time, recent results are found in \cite{sara-bkn}.
An important series of papers exploring arbitrage-like notions by a model-free approach is due to Vladimir Vovk (see e.g. \citet{vovk-vol,vovk-proba,vovk-rough,vovk-cadlag}).
He introduced an outer measure (see \cite[Definition 1.7.1]{tao} for the definition of \emph{outer measure}) on the space of possible price paths, called \emph{upper price} (\defin{upperP}), as the minimum super-replication price of a very special class of European contingent claims. The important intuition behind this notion of upper price is that the sets of price paths with zero upper price, called \emph{null sets}, are exactly those on which a positive portfolio with unit initial endowment can attain infinite capital. The need to guarantee this type of market efficiency in a financial market leads one to discard the null sets. \citeauthor{vovk-proba} says that a property holds for \emph{typical paths} if the set of paths where it does not hold is null, i.e. has zero upper price. Let us give some details. \begin{definition}[Vovk's upper price]\label{def:upperP}
The \emph{upper price} of a set $E\subset\O$ is defined as \begin{equation}\label{eq:upperP}
\bar\PP(E):=\inf_{S\in\S}\{S(0)|\,\forall\w\in\O,\; S(T,\w)\geq\ind_E(\w)\}, \end{equation} where $\S$ is the set of all \emph{positive capital processes} $S$, that is: $S=\sum_{n=1}^\infty\K^{c_n,G_n}$, where $\K^{c_n,G_n}$ are the portfolio values of bounded simple predictable strategies trading at a non-decreasing infinite sequence of stopping times $\{\t^n_i\}_{i\geq1}$, such that for all $\w\in\O$ $\t^n_i(\w)=\infty$ for all but finitely many $i\in\NN$, with initial capitals $c_n$ and with the constraints $\K^{c_n,G_n}\geq0$ on $[0,T]\times\O$ for all $n\in\NN$ and $\sum_{n=1}^\infty c_n<\infty$. \end{definition} It is immediate to see that $\bar\PP(E)=0$ if and only if there exists a positive capital process $S$ with initial capital $S(0)=1$ and infinite capital at time $T$ on all paths in $E$, i.e. $S(T,\w)=\infty$ for all $\w\in E$.
Depending on which space $\O$ is considered, Vovk obtained specific results. In particular, he investigated properties of typical paths that concern their measure of variability. The most general framework considered is $\O=D([0,T],\R_+)$. He proved in \cite{vovk-rough} that typical paths $\w$ have a \emph{$p$-variation index} less than or equal to 2, which means that the $p$-variation is finite for all $p>2$, but we have no information for $p=2$ (a stronger result is stated in \cite[Proposition 1]{vovk-rough}). If we relax the positivity and restrict to \cadlag\ paths with all components having \lq moderate jumps\rq\ in the sense of \eq{mod-jumps}, then \citet{vovk-cadlag} obtained appealing results regarding the quadratic variation of typical paths along special sequences of random partitions. Indeed, by adding a control on the size of the jumps, in the sense of considering the sample space $\O=\O_\psi$, defined as \begin{equation}\label{eq:mod-jumps}
\O_\psi:=\left\{\w\in D([0,T],\R)\bigg|\,\forall t\in(0,T],\;\abs{\De\w(t)}\leq\psi\lf \sup_{s\in[0,t)}\abs{\w(s)}\rg\right\} \end{equation} for a given non-decreasing function $\psi:[0,\infty)\to[0,\infty)$, \citet{vovk-cadlag} obtained finer results. In particular, he proved the existence for typical paths of the quadratic variation in \defin{qv-vovk} along a special sequence of nested vertical partitions. It is however important to remark (\cite[Proposition 1]{vovk-cadlag}) that the same result applies to all sequences of nested partitions of dyadic type, and that any two sequences of dyadic type give the same value of quadratic variation for typical paths. A sequence of nested partitions is called of \emph{dyadic type} if it is composed of stopping times and there exist a polynomial $p$ and a constant $C$ such that \begin{enumerate} \item for all $\w\in\O_\psi$, $n\in\NN_0$, $0\leq s<t\leq T$, if $\abs{\w(t)-\w(s)}>C2^{-n}$, then there is an element of the $n^{th}$ partition which belongs to $(s,t]$, \item for typical $\w$, from some $n$ on, the number of finite elements of the $n^{th}$ partition is at most $p(n)2^{2n}$. \end{enumerate}
The sharpest results are obtained when the sample space is $\O=C([0,T],\R)$ (or equivalently $\O=C([0,T],[0,\infty))$). In this case, it is proved in \cite{vovk-proba} that typical paths are constant or have a $p$-variation which is finite for all $p>2$ and infinite for $p\leq2$ (stronger results are stated in \cite[Corollaries 4.6, 4.7]{vovk-proba}). Note that the situation changes remarkably from the space of \cadlag\ paths to the space of continuous paths. Indeed, no (positive) \cadlag\ path which is bounded away from zero and has finite total variation can belong to a null set in $D([0,T],\R^d_+)$, while all continuous paths with finite total variation belong to a null set in $C([0,T],\R^d_+)$.
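To give the reader a concrete feeling for partitions of dyadic type, the following Python sketch is purely illustrative: a simulated Brownian trajectory, for which the quadratic variation over $[0,1]$ equals $1$ along sufficiently fine partitions, stands in for the price path, and the quadratic variation computed along deterministic dyadic time grids is compared with that along \lq vertical\rq\ grids of stopping times $\t_{k+1}=\inf\{t>\t_k:\,\abs{\w(t)-\w(\t_k)}\geq2^{-n}\}$.
\begin{verbatim}
# Illustrative comparison (assumption: simulated Brownian path, so both
# approximations should approach [w](1) = 1): quadratic variation along
# dyadic time grids versus 'vertical' stopping-time grids.
import numpy as np

rng = np.random.default_rng(2)
N = 2 ** 19
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / N), N))])

def qv_dyadic(w, n):
    idx = np.arange(0, N + 1, N // 2 ** n)   # 2^n dyadic time intervals
    return np.sum(np.diff(w[idx]) ** 2)

def qv_vertical(w, n):
    eps, total, i = 2.0 ** (-n), 0.0, 0
    while i < len(w) - 1:                    # next level-crossing time
        j = i + 1
        while j < len(w) - 1 and abs(w[j] - w[i]) < eps:
            j += 1
        total += (w[j] - w[i]) ** 2
        i = j
    return total

for n in range(4, 10):
    print(n, qv_dyadic(w, n), qv_vertical(w, n))
\end{verbatim}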
A similar notion of outer measure is introduced by \citet{perk-promel} (see also \citet{perkowski-thesis}), which is more intuitive in terms of hedging strategies. He considers portfolio values that are limits of simple predictable portfolios with the same positive initial capital and whose corresponding simple trading strategies never risk more than the initial capital. \begin{definition}[Definition 3.2.1 in \cite{perkowski-thesis}]\label{def:outerP}
The \emph{outer content} of a set $E\subset\O:=C([0,T],\R^d)$ is defined as \begin{equation}\label{eq:outerP}
\tilde\PP(E):=\inf_{(H^n)_{n\geq1}\in\H_{\l,s}}\{\l|\,\forall\w\in\O,\; \liminf_{n\to\infty}(\l+(H^n\bullet\w)(T))\geq\ind_E(\w)\}, \end{equation} where $\H_{\l,s}$ is the set of sequences of \emph{$\l$-admissible simple strategies}, that is of bounded simple predictable strategies $H^n$ trading at a non-decreasing infinite sequence of stopping times $\{\t^n_i\}_{i\geq1}$, with $\t^n_i(\w)=\infty$ for all but finitely many $i\in\NN$ for all $\w\in\O$, such that $(H^n\bullet\w)(t)\geq-\l$ for all $(t,\w)\in[0,T]\times\O$. \end{definition} Analogously to Vovk's upper price, the $\tilde\PP$-null sets are identified with the sets where the limit inferior of the portfolio values of some sequence of 1-admissible simple strategies brings infinite capital at time $T$. This characterization is shown to be a model-free interpretation of the condition of \emph{no arbitrage of the first kind} (NA1) from mathematical finance, also referred to as \emph{no unbounded profit with bounded risk} (see e.g. \cite{kk2007,kardaras}). Indeed, in a financial model where the price process is a semimartingale on some probability space $(\O,\F,\PP)$, the (NA1) property holds if the set $\{1+(H\bullet S)(T),\,H\in\H_1\}$ is bounded in $\PP$-probability, i.e. if $$\lim_{c\to\infty}\sup_{H\in\H_{1,s}}\PP(1+(H\bullet S)(T)\geq c)=0.$$ On the other hand, \cite[Proposition 3.28]{perkowski-thesis} proved that an event $A\in\F$ which is $\tilde\PP$-null has zero probability under any probability measure on $(\O,\F)$ such that the coordinate process satisfies (NA1).
However, the characterization of null sets in \cite{perk-promel,perkowski-thesis} is possibly weaker than Vovk's one. In fact, the outer measure $\tilde\PP$ is dominated by the outer measure $\bar\PP$.
A distinct approach to a model-free characterization of arbitrage is proposed by \citet{riedel}, although he only allows for static hedging. He considers a Polish space $(\O,\mathrm{d})$ with the Borel sigma-field and assumes that there are $D$ uncertain assets in the market, with known non-negative prices $f_d\geq0$ at time 0 and uncertain values $S_d$ at time $T$ which are continuous on $(\O,\mathrm{d})$, $d=1,\ldots,D$. A portfolio is a vector $\pi$ in $\R^{D+1}$ and it is called an \emph{arbitrage} if $\pi\cdot f\leq0$, $\pi\cdot S\geq0$ and $\pi\cdot S(\w)>0$ for some $\w\in\O$, where $f_0=S_0=1$. Thus the classical ``almost surely'' is replaced by ``for all scenarios'' and ``with positive probability'' is replaced by ``for some scenarios''. The main theorem in \cite{riedel} is a model-free version of the FTAP and states that the market is \emph{arbitrage-free} if and only if there exists a \emph{full support martingale measure}, that is a probability measure whose topological support in the Polish space of reference is the full space and under which the expectation of the final prices $S$ equals the initial prices $f$. This is proven thanks to the continuity assumption of $S(\w)$ in $\w$ on one side and a separation argument on the other side. Even without a prior probability assumption, it shows that, if there are no (static) arbitrages in the market, it is possible to introduce a pricing probability measure, which assigns positive probability to all open sets.
\section{The setting} \label{sec:setting}
We consider a continuous-time frictionless market open for trade during the time interval $[0,T]$, where $d$ risky (non-dividend-paying) assets are traded besides a riskless security, named \lq bond\rq. The latter is assumed to be the numeraire security and we refer directly to the forward asset prices and portfolio values, which simplifies the notation without loss of generality.
Our setting does not make use of any (subjective) probabilistic assumption on the market dynamics and we construct trading strategies based on the realized paths of the asset prices.
Precisely, we consider the metric space $(\O,||\cdot||_\infty)$, $\O:=D([0,T],\R^d_+)$, provided with the Borel sigma-field $\F$ and the canonical filtration $\FF=\Ft$, that is the natural filtration generated by the coordinate process $S$, $S(t,\w):=\w(t)$ for all $\w\in\O$, $t\in[0,T]$. Thinking of our financial market, $\O$ represents the space of all possible trajectories of the asset prices up to time $T$. When considering only continuous price trajectories, we will restrict to the subspace $\O^0:=C([0,T],\R^d_+)$.
In such analytical framework, we think of a continuous-time path\hyp dependent trading strategy as determined by the value of the initial investment and the quantities of asset and bond holdings, given by functions of time and of the price trajectory. \begin{definition}
A \emph{trading strategy} in $(\O,\F)$ is any triple $(V_0,\phi,\psi)$, where $V_0:\O\to\R$ is $\F_0$-measurable and $\phi=(\phi(t,\cdot))_{t\in(0,T]},\psi=(\psi(t,\cdot))_{t\in(0,T]}$ are $\FF$-adapted \caglad\ processes on $(\O,\F)$, respectively with values in $\R^d$ and in $\R$. The portfolio value $V$ of such trading strategy at any time $t\in[0,T]$ and for any price path $\w\in\O$ is given by $$V(t,\w;\phi,\psi)=\phi(t,\w)\cdot\w(t)+\psi(t,\w).$$ \end{definition}
Economically speaking, $\phi(t,\w),\psi(t,\w)$ represent the vectors of the number of assets and bonds, respectively, held in the trading portfolio at time $t$ in the scenario $\w\in\O$. The left-continuity of the trading processes comes from the fact that any revision to the portfolio is executed at the instant just after the time the decision is made. On the other hand, their right-continuous modifications $\phi(t+,\w),\psi(t+,\w)$, defined by $$\phi(t+,\w):=\lim\limits_{s\searrow t}\phi(s,\w),\ \psi(t+,\w):=\lim\limits_{s\searrow t}\psi(s,\w),\quad\forall\w\in\O,\,t\in[0,T)$$ represent respectively the number of assets and bonds in the portfolio just after any revision of the trading portfolio decided at time $t$. The choice of strategies adapted to the canonical filtration conveys the realistic assumption that any trading decision makes use only of the price information available at the time it takes place.
We aim to identify \emph{self-financing trading strategies} in this pathwise framework, that is portfolios where changes in the asset position are necessarily financed by buying or selling bonds, without adding or withdrawing any cash. In particular, we look for those of them which trade continuously in time but still allow for an explicit computation of the gain from trading. In the classical literature on continuous-time financial market models, unlike for discrete-time models, there is no general pathwise characterization of self-financing dynamic trading strategies, mainly because of the probabilistic characterization of the gain in terms of a stochastic integral with respect to the asset price process. In the same way, the number of bonds which continuously rebalances the portfolio has no pathwise representation.
Here, we start by considering strategies where the portfolio is rebalanced only a finite number of times, for which the self-financing condition is well established and whose gain is given by a pathwise integral, equal to a Riemann sum.
Henceforth, we will take as given a dense nested sequence of time partitions, $\Pi=(\pi^n)_{n\geq1}$, i.e. $\pi^n=\{0=t^n_0<t^n_1<\ldots<t^n_{m(n)}=T\}$, $\pi^n\subset\pi^{n+1}$, $\abs{\pi^n}\limn0$.
We denote by $\Si(\Pi)$ the set of simple predictable processes whose jump times are covered by one of the partitions in $\Pi$\footnote{We could assume in more generality that the jump times are only covered by $\cup_{n\geq1}\pi^n$, but at the expense of more complicated formulas}: \begin{align*} \Si(\pi^n):={}&\bigg\{\phi:\;\forall i=0,\ldots,m(n)-1,\;\exists \l_i\,\F_{t^n_i}\mbox{-measurable }\R^d\mbox{-valued}\\ &\quad\mbox{random variable on }(\O,\F),\;\phi(t,\w)=\sum_{i=0}^{m(n)-1}\l_i(\w)\ind_{(t^n_i,t^n_{i+1}]}\bigg\},\\ \Si(\Pi):={}&\underset{n\geq1}\cup\Si(\pi^n). \end{align*}
\section{Self-financing strategies} \label{sec:self-fin}
\begin{definition}
$(V_0,\phi,\psi)$ is called a \emph{simple self-financing trading strategy} if it is a trading strategy such that $\phi\in\Si(\pi^n)$ for some $n\in\NN$ and \begin{align} \nonumber\psi(t,\w;\phi)={}&V_0-\phi(0+,\w)\cdot\w(0)-\sum_{i=1}^{m(n)-1}\w(t^n_i\wedge t)\cdot(\phi(t^n_{i+1}\wedge t,\w)-\phi(t^n_i\wedge t,\w)) \\
={}& V_0-\phi(0+,\w)\cdot\w(0)-\sum_{i=1}^{k(t,n)}\w(t^n_i)\cdot(\l_{i}(\w)-\l_{i-1}(\w)),\label{eq:psi-sf}
\end{align} where $\phi(t,\w)=\sum_{i=0}^{m(n)-1}\l_i(\w)\ind_{(t^n_i,t^n_{i+1}]}$ and $k(t,n):=\max\{i\in\{1,\ldots,m(n)\}\;:\;t^n_i<t\}$. The \emph{gain} of such a strategy is defined at any time $t\in[0,T]$ by \begin{align*} G(t,\w;\phi):={}&\sum_{i=1}^{m(n)}\phi(t^n_{i}\wedge t,\w)\cdot(\w(t^n_{i}\wedge t)-\w(t^n_{i-1}\wedge t)) \\
={}& \sum_{i=1}^{k(t,n)}\l_{i-1}(\w)\cdot(\w(t^n_{i})-\w(t^n_{i-1}))+\l_{k(t,n)}(\w)\cdot(\w(t)-\w(t^n_{k(t,n)})).
\end{align*} \end{definition} In the following, when there is no ambiguity, we drop the dependence of $k$ on $t,n$ and write $k\equiv k(t,n)$.
Note that the definition \eq{psi-sf} is equivalent to requiring that the trading strategy $(V_0,\phi,\psi)$ satisfies $$V(t,\w;\phi,\psi)\equiv V(t,\w;\phi)=V_0+G(t,\w;\phi).$$
Since a simple self-financing trading strategy is uniquely determined by its initial investment and the asset position at all times, we will drop the dependence on $\psi$ of the quantities involved. For instance, when we are referring to a simple self-financing strategy $(V_0,\phi)$, we implicitly refer to the triplet $(V_0,\phi,\psi)$ with $\psi\equiv\psi(\cdot,\cdot;\phi)$ defined in \eq{psi-sf}.
\begin{remark} The portfolio value $V(\cdot,\cdot;\phi)$ of a simple self-financing strategy $(V_0,\phi,\psi)$ is a \cadlag\ $\FF$-adapted process on $(\O,\F)$, satisfying $$\Delta V(t,\w;\phi)=\phi(t,\w)\cdot\Delta\w(t),\quad \forall t\in[0,T],\w\in\O.$$ \end{remark} The right-continuity of $V$ comes from the definition \eq{psi-sf}, which implies, for all $t\in[0,T]$ and $\w\in\O$, $$\psi(t,\w)+\phi(t,\w)\cdot\w(t)=\psi(t+,\w)+\phi(t+,\w)\cdot\w(t).$$
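The bookkeeping implied by \eq{psi-sf} is elementary and can be made concrete in a few lines. The following Python sketch is purely illustrative ($d=1$, an arbitrary simulated price path, arbitrary positions $\l_i$ and initial capital): it computes the bond holdings from the self-financing condition and verifies the identity $V=V_0+G$ at every rebalancing date.
\begin{verbatim}
# Illustrative bookkeeping for a simple self-financing strategy (d = 1;
# arbitrary simulated prices w(t_0..t_m), positions lam[i] on (t_i,t_{i+1}]).
import numpy as np

rng = np.random.default_rng(3)
m = 10
w = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.02, m + 1)))  # prices
lam = rng.normal(0.0, 1.0, m)        # lam[i] assets held on (t_i, t_{i+1}]
V0 = 5.0

psi = np.empty(m)
psi[0] = V0 - lam[0] * w[0]          # initial purchase financed by bonds
for i in range(1, m):
    # each rebalancing at t_i is financed entirely by the bond account
    psi[i] = psi[i - 1] - w[i] * (lam[i] - lam[i - 1])

gain = np.concatenate([[0.0], np.cumsum(lam * np.diff(w))])  # Riemann sums
V = np.concatenate([[V0], lam * w[1:] + psi])                # phi*w + psi
print(np.allclose(V, V0 + gain))                             # True
\end{verbatim}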
Below, we are going to establish the self-financing conditions for (non\hyp simple) trading strategies.
\begin{definition}\label{def:path-sf}
Given an $\F_0$-measurable random variable $V_0:\O\to\R$ and an $\FF$-adapted $\R^d$-valued \caglad\ process $\phi=(\phi(t,\cdot))_{t\in(0,T]}$ on $(\O,\F)$, we say that $(V_0,\phi)$ is a \emph{self-financing trading strategy on} $U\subset\O$ if there exists a sequence of self-financing simple trading strategies $\{(V_0,\phi^n,\psi^n), n\in\NN\}$, such that $$\forall\w\in U,\,\forall t\in[0,T],\quad\phi^n(t,\w)\limn\phi(t,\w),$$ and any of the following conditions is satisfied:
\begin{enumerate}[(i)]
\item there exists an $\FF$-adapted real-valued \cadlag\ process $G(\cdot,\cdot;\phi)$ on $(\O,\F)$ such that, for all $t\in[0,T],\w\in U$, $$G(t,\w;\phi^n)\limn G(t,\w;\phi)\quad\text{and}\quad\Delta G(t,\w;\phi)=\phi(t,\w)\cdot\Delta\w(t);$$
\item there exists an $\FF$-adapted real-valued \cadlag\ process $\psi(\cdot,\cdot;\phi)$ on $(\O,\F)$ such that, for all $t\in[0,T],\w\in U$, $$\psi^n(t,\w)\limn\psi(t,\w;\phi)$$ and $$\psi(t+,\w;\phi)-\psi(t,\w;\phi)=-\w(t)\cdot\lf\phi(t+,\w)-\phi(t,\w)\rg;$$
\item there exists an $\FF$-adapted real-valued \cadlag\ process $V(\cdot,\cdot;\phi)$ on $(\O,\F)$ such that, for all $t\in[0,T],\w\in U$, $$V(t,\w;\phi^n)\limn V(t,\w;\phi)\quad\text{and}\quad\Delta V(t,\w;\phi)=\phi(t,\w)\cdot\Delta\w(t).$$
\end{enumerate} \end{definition}
\begin{remark}\label{rmk:path-sf} It is easy to see that the three conditions (i)-(iii) of Definition \ref{def:path-sf} are equivalent. If any of them is fulfilled, the limiting processes $G,\psi,V$ are respectively the gain, bond holdings and portfolio value of the self-financing strategy $(V_0,\phi)$ on $U$ and they satisfy, for all $t\in[0,T],\w\in U$, \begin{equation}
\label{eq:sf} V(t,\w;\phi)=V_0+G(t,\w;\phi)
\end{equation} and \begin{equation}
\label{eq:psi}
\psi(t,\w;\phi)=V_0-\phi(0+,\w)\cdot\w(0)-\Limn\sum_{i=1}^{m(n)}\w(t^n_i\wedge t)\cdot(\phi^n(t^n_{i+1}\wedge t,\w)-\phi^n(t^n_{i}\wedge t,\w)). \end{equation} \end{remark} Equation \eq{sf} is the pathwise counterpart of the classical definition of self-financing in probabilistic financial market models. However, in our purely analytical framework, we could not take it directly as the self-financing condition, because some prior assumptions are needed to define path-by-path the quantities involved.
\section{A plausibility requirement} \label{sec:reasonable}
The results reviewed in Subsection \ref{sec:arbitrage} cannot be applied directly to our framework, because the partitions considered there consist of stopping times, i.e. they depend on the path, while we want to work with a fixed sequence of partitions $\Pi$ rather than with a random one. Nonetheless, we can deduce that if we consider a singleton $\{\w\}$, where $\w\in\O_\psi$ with $\O_\psi$ defined in \eq{mod-jumps}, and our sequence of partitions is of dyadic type for $\w$, then the property of finite quadratic variation for $\w$ is necessary to prevent the existence of a positive capital process, according to \defin{upperP}, trading at times in $\Pi$, that starts from a finite initial capital but ends up with infinite capital at time $T$. However, the conditions imposed on the sequence of partitions are difficult to check.
Instead, we reverse the point of view: we want to keep our sequence of partitions $\Pi$ fixed and to identify the right subset of paths in $\O$ with which it is \emph{plausible} to work. To do so, we propose the following notion of \emph{plausibility} which, together with a technical condition on the paths, suggests that it is indeed a good choice to work on the set of price paths with finite quadratic variation along $\Pi$, as we do in all the following sections. \begin{definition}
A set of paths $U\subset\O$ is called \emph{plausible} if there does not exist a sequence $(V_0^n,\phi^n)$ of simple self-financing strategies such that:
\begin{enumerate}[(i)]
\item the corresponding sequence of portfolio values, $\{V(t,\w;\phi^n)\}_{n\geq1}$, is non-decreasing for all paths $\w\in U$ at any time $t\in[0,T]$,
\item the corresponding sequence of initial investments $\{V^n_0(\w_0)\}_{n\geq1}$ converges for all paths $\w\in U$,
\item the corresponding sequence of gains along some path $\w\in U$ at the final time $T$ grows to infinity with $n$, i.e. $G(T,\w;\phi^n)\limn\infty$.
\end{enumerate} \end{definition}
\begin{proposition} Let $U\subset\O$ be a set of price paths such that, for all $(t,\w)\in[0,T]\times U$, the series \begin{equation}\label{eq:cn-conv} \sum_{n=1}^\infty\lf\sum_{i=0}^{m(n-1)-1}\!\!\!\!\!\!\sum_{\stackrel{j,k:\,j\neq k,}{t^{n-1}_i\leq t_j^n,t^n_k<t^{n-1}_{i+1}}}\!\!\!\!\!\!(\w(t^n_{j+1}\wedge t)-\w(t^n_j\wedge t))\cdot(\w(t^n_{k+1}\wedge t)-\w(t^n_k\wedge t))\rg^+ \end{equation} is finite, where $(x)^+:=\max\{0,x\}$ and $(x)^-:=\max\{0,-x\}$ denote respectively the positive and negative part of $x\in\R$. Then, if $U$ is plausible, all paths $\w\in U$ have \fqv{\Pi}. \end{proposition}
\proof First, let us write explicitly what condition \eq{cn-conv} means in terms of the relation between $\w$ and the sequence of nested partitions $\Pi$. Let $d=1$ for ease of notation. Denote by $A^n$ the $n^{th}$ approximation of the quadratic variation along $\Pi$, i.e. $$A^n(t,\w):=\sum_{i=0}^{m(n)-1}(\w(t^n_{i+1}\wedge t)-\w(t^n_i\wedge t))^2\quad\forall(t,\w)\in[0,T]\times\O.$$ Then: \begin{align*}
&A^n(t,\w)-A^{n-1}(t,\w)=\\ ={}&\sum_{i=0}^{m(n)-1}(\w(t^n_{i+1}\wedge t)-\w(t^n_i\wedge t))^2-\sum_{i=0}^{m(n-1)-1}(\w(t^{n-1}_{i+1}\wedge t)-\w(t^{n-1}_i\wedge t))^2\\
={}&\sum_{i=0}^{m(n-1)-1}\!\lf\sum_{t^{n-1}_i\leq t_j^n< t^{n-1}_{i+1}}\!\!(\w(t^n_{j+1}\wedge t)-\w(t^n_j\wedge t))^2-(\w(t^{n-1}_{i+1}\wedge t)-\w(t^{n-1}_i\wedge t))^2\rg\\
={}&{}-\sum_{i=0}^{m(n-1)-1}\!\sum_{\stackrel{j,k:\,j\neq k,}{t^{n-1}_i\leq t_j^n,t^n_k<t^{n-1}_{i+1}}}\!(\w(t^n_{j+1}\wedge t)-\w(t^n_j\wedge t))(\w(t^n_{k+1}\wedge t)-\w(t^n_k\wedge t)).
\end{align*} Thus the series in \eq{cn-conv} is exactly the series $\sum_{n=1}^\infty (A^n(t,\w)-A^{n-1}(t,\w))^-$. Now, for $n\in\NN$, let us define a simple predictable process $\phi^n\in\Si(\pi^n)$ by
\begin{align}
\label{eq:Vn}
\phi^n(t,\w):={}&{}-2\sum_{i=0}^{m(n)-1}\w(t^n_i)\ind_{(t^n_i,t^n_{i+1}]}(t).
\end{align} Then, we can rewrite the $n^{\mathrm{th}}$ approximation of the quadratic variation of $\w$ at time $t\in[0,T]$ as \begin{align}
A^n(t,\w)={}&\w(t)^2-\w(0)^2-2\sum_{i=0}^{m(n)-1}\w(t^n_i)(\w(t^n_{i+1}\wedge t)-\w(t^n_i\wedge t))\nonumber\\ ={}&\w(t)^2-\w(0)^2+G(t,\w;\phi^n)\nonumber\\ ={}&V(t,\w;\phi^n)-c_n, \label{eq:An} \end{align} where $c_n=\w(0)^2-\w(t)^2+V^n_0(\w_0)$. We want to define the initial capitals $V^n_0$ in such a way that the sequence of simple self-financing strategies $(V_0^n,\phi^n)$ has non-decreasing portfolio values at any time and the sequence of initial capitals converges. By writing \begin{equation}\label{eq:kn} A^n(t,\w)-A^{n-1}(t,\w)+k_n=V(t,\w;\phi^n)-V(t,\w;\phi^{n-1}), \end{equation} where $k_n=c_n-c_{n-1}=V^{n}_0(\w_0)-V^{n-1}_0(\w_0)$, we see that the monotonicity of $\{V(t,\w;\phi^n)\}_{n\in\NN}$ is obtained by suitably choosing a finite $k_n\geq0$ (i.e. by choosing $V^n_0$), which is made possible by the boundedness of $\abs{A^n(t,\w)-A^{n-1}(t,\w)}$, implied by condition \eq{cn-conv}. However, it is not sufficient to have $k_n<\infty$ for all $n\in\NN$; we need the convergence of the series $\sum_{n=1}^\infty k_n$. This is again provided by condition \eq{cn-conv}, because the minimum value of $k_n$ ensuring the positivity of \eq{kn} for all $t\in[0,T]$ is indeed $\max_{t\in[0,T]}(A^n(t,\w)-A^{n-1}(t,\w))^-$.
On the other hand, since both the sequence $\{V(t,\w;\phi^n)\}_{n\geq1}$, for any $t\in[0,T]$, and the sequence $\{V^n_0\}_{n\geq1}$ are regular, i.e. they have a limit as $n$ goes to infinity, by \eq{An} the sequence $\{A^n(t,\w)\}_{n\geq1}$ is also regular. Finally, since the sequence of initial capitals converges, equation \eq{An} implies that the sequence of approximations of the quadratic variation of $\w$ converges if and only if $\{G(T,\w;\phi^n)\}_{n\geq1}$ converges. But $U$ is a plausible set by assumption, hence the convergence must hold. \endproof
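The identity \eq{An} used in the proof is purely algebraic and holds for any discrete path; the following Python sketch (purely illustrative, on an arbitrary simulated path) verifies it.
\begin{verbatim}
# Illustrative check of the algebraic identity behind \eq{An}: for any
# values w(t_0),...,w(t_m),
#   sum (dw_i)^2 = w(t_m)^2 - w(t_0)^2 - 2 sum w(t_i) dw_i,
# i.e. A^n(T) = w(T)^2 - w(0)^2 + G(T; phi^n) with phi^n = -2 w(t_i)
# on (t_i, t_{i+1}].
import numpy as np

rng = np.random.default_rng(4)
w = rng.normal(size=257)                   # an arbitrary discrete path
dw = np.diff(w)
A_n = np.sum(dw ** 2)                      # n-th approximation of [w](T)
G_n = np.sum(-2.0 * w[:-1] * dw)           # gain of the strategy phi^n
print(np.isclose(A_n, w[-1] ** 2 - w[0] ** 2 + G_n))   # True
\end{verbatim}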
\section{Pathwise construction of the gain process} \label{sec:gain}
In the following two propositions we show that we can identify a special class of (pathwise) self-financing trading strategies, respectively on the set of continuous price paths with \fqv{\Pi} and on the set of \cadlag\ price paths with \fqv{\Pi}, whose gain is computable path-by-path as a limit of Riemann sums.
\begin{proposition}[Continuous price paths]\label{prop:G}
Let $\phi=(\phi(t,\cdot))_{t\in(0,T]}$ be an $\FF$-adapted $\R^d$-valued \caglad\ process on $(\O,\F)$ such that there exists $F\in\Cloc(\L_T)\cap\CC^{0,0}(\W_T)$ satisfying
\begin{equation}
\phi(t,\w)=\vd F(t,\w_{t})\quad\forall\w\in Q(\O,\Pi),t\in[0,T].
\end{equation} Then, there exists a \cadlag\ process $G(\cdot,\cdot;\phi)$ such that, for all $\w\in Q(\O^0,\Pi)$ and $t\in[0,T]$,
\begin{align}
G(t,\w;\phi)={}&\int_0^t \phi(u,\w_u)\cdot\mathrm{d}^{\Pi}\w \label{eq:fi}\\
\label{eq:pathint}
={}&\lim_{n\rightarrow\infty}\sum_{t^n_i\leq t}\vd F(t_i^n,\w^{n}_{t^n_i-})\cdot(\w(t_{i+1}^n\wedge T)-\w(t_i^n\wedge T)),
\end{align} where $\w^n$ is defined as in \eq{wn}. Moreover, $\phi$ is the asset position of a pathwise self-financing trading strategy on $Q(\O^0,\Pi)$ with gain process $G(\cdot,\cdot;\phi)$. \end{proposition} \begin{proof}
First of all, under the assumptions, the change of variable formula for functionals of continuous paths holds (\cite[Theorem 3]{contf2010}), which ensures the existence of the limit in \eq{pathint} and provides us with the definition of the F\"ollmer integral in \eq{fi}. Then, we observe that each $n^{th}$ sum on the right-hand side of \eqref{eq:pathint} is exactly the accumulated gain of a pathwise self-financing strategy which trades only a finite number of times. Precisely, let us define, for all $\w\in\O$ and all $t\in[0,T]$, \begin{align*}
\phi^n(t,\w):={}&\phi(0+,\w)\ind_{\{0\}}(t)+\sum_{i=0}^{m(n)-1}\phi\left(t^n_{i},\w^{n}_{t^n_i}\right)\ind_{(t^n_{i},t^n_{i+1}]}(t),\\ \intertext{and} \psi^n(t,\w):={}& V_0-\phi(0+,\w)\cdot\w(0)-\sum_{i=1}^{m(n)-1}\w(t^n_{i}\wedge t)\cdot(\phi^n(t^n_{i+1}\wedge t,\w)-\phi^n(t^n_{i}\wedge t,\w)). \end{align*} Then $(V_0,\phi^n,\psi^n)$ is a simple self-financing strategy, with cumulative gain $G(\cdot,\cdot;\phi^n)$ given by \begin{align*} G(t,\w;\phi^n)={}&\sum_{i=1}^{k}\vd F\lf t^n_{i-1},\w^{n}_{t^n_{i-1}-}\rg\cdot(\w(t^n_{i})-\w(t^n_{i-1}))\\ &\,+\vd F\lf t^n_{k},\w^{n}_{t^n_{k}-}\rg\cdot(\w(t)-\w(t^n_{k})) \end{align*} and portfolio value $V(\cdot,\cdot;\phi^n)$ given by $$V(t,\w;\phi^n)=\psi^n(t,\w)+\phi^n(t,\w)\cdot\w(t)=V_0+G(t,\w;\phi^n).$$ Then, we have to verify that the simple asset position $\phi^n$ converges pointwise to $\phi$, i.e.
$$\forall\w\in\O,\,\forall t\in[0,T],\quad|\phi^n(t,\w)-\phi(t,\w)|\limn0.$$ This is true because, by assumption, $\vd F\in\CC_l^{0,0}(\L_T)$, which implies that the path $t\mapsto\vd F(t,\w_{t-})=\vd F(t,\w_{t})$ is left-continuous (see \rmk{regularity}). Indeed, for each $t\in[0,T]$, $\w\in\O$ and $\e>0$, there exist $\bar n\in\NN$ and $\eta>0$ such that, for all $n\geq\bar n$,
$$\dinf\left((t^n_k,\w^{n}_{t_k^n-}),(t,\w)\right)=\max\left\{||\w^n_{t^n_k-}-\w_{t^n_k-}||_\infty,\sup_{u\in[t^n_k,t)}|\w(t^n_k)-\w(u)|\right\}+|t-t_k^n|<\eta,$$ where $k\equiv k(t,n):=\max\{i\in\{1,\ldots,m(n)\}\;:\;t^n_i<t\}$, and \begin{align*}
|\phi^n(t,\w)-\phi(t,\w)|={}&\abs{\phi(t_k^n,\w_{t_k^n}^{n})-\phi(t,\w)}\\ ={}&\abs{\vd F(t_k^n,\w_{t_k^n-}^{n})-\vd F(t,\w_{t})}\\ \leq{}&\e. \end{align*} We have thus built a sequence of self-financing simple trading strategies approximating $\phi$ and, if the realized price path $\w$ is continuous with finite quadratic variation along $\Pi$, then the gain of the simple strategies converges to a real-valued \cadlag\ function $G(\cdot,\w;\phi)$. Namely, for all $t\in[0,T]$ and $\w\in Q(\O^0,\Pi)$, $$G(t,\w;\phi^n)\limn G(t,\w;\phi),\quad G(t,\w;\phi)=\int_0^t\vd F(u,\w_u)\cdot\mathrm{d}^{\Pi}\w.$$ Moreover, by the assumptions on $F$ and by \rmk{regularity}, the map $t\mapsto F(t,\w_{t})$ is continuous for all $\w\in C([0,T],\R^d)$. Therefore, by the change of variable formula for functionals of continuous paths, $G(\cdot,\w;\phi)$ is continuous for all $\w\in Q(\O^0,\Pi)$.
Thus, the process $G(\cdot,\cdot;\phi)$ satisfies condition (i) in Definition \ref{def:path-sf} and so it is the gain process of the self-financing trading strategy with initial value $V_0$ and asset position $\phi$, on the set of continuous paths with \fqv{\Pi}. \end{proof}
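In the special non-path-dependent case $F(t,\w_t)=f(\w(t))$ with $f$ smooth, one has $\phi(t,\w)=f'(\w(t))$ and the limit \eq{pathint} can be observed numerically. The following Python sketch is purely illustrative: a simulated Brownian trajectory, for which $[\w](t)=t$ along sufficiently fine partitions, stands in for the price path, and the Riemann sums along dyadic partitions are compared with the value predicted by the change of variable formula.
\begin{verbatim}
# Illustrative convergence of the Riemann sums in the proposition above
# (assumptions: non-path-dependent F(t, w_t) = f(w(t)) with f = sin;
# simulated Brownian path w, so that [w](t) = t and
#   sum f'(w(t_i)) dw  ->  f(w(1)) - f(w(0)) - (1/2) int_0^1 f''(w(t)) dt).
import numpy as np

rng = np.random.default_rng(5)
N = 2 ** 20
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / N), N))])
f, df, d2f = np.sin, np.cos, lambda x: -np.sin(x)

target = f(w[-1]) - f(w[0]) - 0.5 * np.sum(d2f(w[:-1])) / N  # Follmer value
for n in [8, 12, 16, 20]:
    idx = np.arange(0, N + 1, N // 2 ** n)   # dyadic partition pi^n
    riemann = np.sum(df(w[idx[:-1]]) * np.diff(w[idx]))
    print(n, riemann, target)
\end{verbatim}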
\begin{corollary}\label{cor:path-sf}
Let $\phi$ be as in \prop{G}, then $\psi(\cdot,\cdot;\phi)$, defined for all $t\in[0,T]$ and $\w\in Q(\O^0,\Pi)$ by
\begin{align*}
\psi(t,\w;\phi)={}&V_0-\phi(0+,\w)\cdot\w(0)\\ &{}-\Limn\sum_{i=1}^{k(t,n)}\w(t^n_i)\cdot\lf\vd F\lf t^n_{i},\w^{n}_{t^n_{i}-}\rg-\vd F\lf t^n_{i-1},\w^{n}_{t^n_{i-1}-}\rg\rg,
\end{align*} is the bond holding process of the self-financing trading strategy $(V_0,\phi)$ on $Q(\O^0,\Pi)$. \end{corollary}
\begin{proposition}[C\`adl\`ag price paths]\label{prop:G-cadlag}
Let $\phi=(\phi(t,\cdot))_{t\in(0,T]}$ be an $\FF$-adapted $\R^d$-valued \caglad\ process on $(\O,\F)$ such that there exists $F\in\Cloc(\L_T)\cap\CC^{0,0}_r(\L_T)$ with $\vd F\in\CC^{0,0}(\L_T)$, satisfying
\begin{equation*}
\phi(t,\w)=\vd F(t,\w_{t-})\quad\forall\w\in Q(\O,\Pi),\,t\in[0,T].
\end{equation*} Then, there exists a \cadlag\ process $G(\cdot,\cdot;\phi)$ such that, for all $\w\in Q(\O,\Pi)$ and $t\in[0,T]$,
\begin{align}
\nonumber
G(t,\w;\phi)={}&\int_0^t \phi(u,\w_u)\cdot\mathrm{d}^{\Pi}\w \\
\label{eq:pathint-cadlag}
={}&\lim_{n\rightarrow\infty}\sum_{t^n_i\leq t}\vd F\lf t_i^n,\w^{n,\De\w(t_i^n)}_{t^n_i-}\rg\cdot(\w(t_{i+1}^n\wedge T)-\w(t_i^n\wedge T)),
\end{align} where $\w^n$ is defined as in \eq{wn}. Moreover, $\phi$ is the asset position of a pathwise self-financing trading strategy on $Q(\O,\Pi)$ with gain process $G(\cdot,\cdot;\phi)$. \end{proposition} \begin{proof} The proof follows the lines of that of \prop{G}, using the change of variable formula for functionals of \cadlag\ paths instead of continuous paths, which entails the definition of the pathwise integral \eq{pathint-cadlag}. For all $\w\in\O$ and $t\in[0,T]$, we define \begin{align*}
\phi^n(t,\w):={}&\phi(0+,\w)\ind_{\{0\}}(t)+\sum_{i=0}^{m(n)-1}\phi\left(t^n_{i}+,\w^{n,\De\w(t^n_{i})}_{t^n_i-}\right)\ind_{(t^n_{i},t^n_{i+1}]}(t)\\ \intertext{and} \psi^n(t,\w):={}& V_0-\phi(0+,\w)\cdot\w(0)-\sum_{i=1}^{m(n)-1}\w(t^n_{i}\wedge t)\cdot(\phi^n(t^n_{i+1}\wedge t,\w)-\phi^n(t^n_{i}\wedge t,\w)). \end{align*} Then $(V_0,\phi^n,\psi^n)$ is a simple self-financing strategy, with cumulative gain $G(\cdot,\cdot;\phi^n)$ given by \begin{align*} G(t,\w;\phi^n)={}&\sum_{i=1}^{k}\vd F\lf t^n_{i-1},\w^{n,\De\w(t^n_{i-1})}_{t^n_{i-1}-}\rg\cdot(\w(t^n_{i})-\w(t^n_{i-1}))\\ &\,+\vd F\lf t^n_{k},\w^{n,\De\w(t^n_{k})}_{t^n_{k}-}\rg\cdot(\w(t)-\w(t^n_{k})). \end{align*} Finally, we verify that
$$\forall\w\in\O,\,\forall t\in[0,T],\quad|\phi^n(t,\w)-\phi(t,\w)|\limn0.$$ This is true by the left-continuity of $\vd F$: for each $t\in[0,T]$ and $\w\in\O$, $\forall \e>0$, $\exists \eta=\eta(\e)>0$, $\exists\bar n=\bar n(t,\eta)\in\NN$ such that, $\forall n\geq\bar n$,
$$\dinf\left(\w^{n,\De\w(t^n_k)}_{t_k^n-},\w_{t-}\right)=\max\left\{||\w^n_{t^n_k-}-\w_{t^n_k-}||_\infty,\sup_{u\in[t^n_k,t)}|\w(t^n_k)-\w(u)|\right\}+|t-t_k^n|<\eta,$$ hence \begin{align*}
|\phi^n(t,\w)-\phi(t,\w)|={}&\abs{\lim_{s\searrow t^n_k}\phi(s,\w_{t_k^n-}^{n,\De\w(t^n_k)})-\phi(t,\w)}\\ ={}&\lim_{s\searrow t^n_k}\abs{\vd F(s,\w_{t_k^n-}^{n,\De\w(t^n_k)})-\vd F(t,\w_{t-})}\\ \leq&\e. \end{align*} Therefore: $$G(t,\w;\phi^n)=\limn G(t,\w;\phi),\quad G(t,\w;\phi)=\int_{(0,t]}\vd F(u,\w_{u-})\cdot\mathrm{d}^{\Pi}\w,$$ where $G(t,\w;\phi)$ is an $\FF$-adapted real-valued process on $(\O,\F)$. Moreover, by the change of variable formula \eq{fif-d} and \rmk{regularity}, it is \cadlag\ with left-side jumps \begin{align*}
\De G(t,\w;\phi)={}&\lim_{s\nearrow t}(G(t,\w;\phi)-G(s,\w;\phi))\\
={}& F(t,\w_{t})- F(t,\w_{t-})-\lf F(t,\w_{t})- F(t,\w_{t-})-\vd F(t,\w_{t-})\cdot\De\w(t)\rg\\
={}&\vd F(t,\w_{t-})\cdot\De\w(t). \end{align*} \end{proof}
\begin{corollary}
Let $\phi$ be as in \prop{G-cadlag}, then $\psi(\cdot,\cdot;\phi)$, defined for all $t\in[0,T]$ and $\w\in Q(\O,\Pi)$ by \begin{align*} \psi(t,\w;\phi)={}&V_0-\phi(0+,\w)\cdot\w(0)\\ &{}-\Limn\sum_{i=1}^{k(t,n)}\w(t^n_i)\cdot\lf\vd F\lf t^n_{i},\w^{n,\De\w(t^n_{i})}_{t^n_{i}-}\rg-\vd F\lf t^n_{i-1},\w^{n,\De\w(t^n_{i-1})}_{t^n_{i-1}-}\rg\rg \end{align*} is the bond position process of the trading strategy $(V_0,\phi,\psi)$, which is self-financing on $Q(\O,\Pi)$. \end{corollary}
\section{Pathwise replication of contingent claims} \label{sec:replication}
A non-probabilistic replication result, restricted to the non-path-dependent case, was obtained by \citet{bickwill}, as shown in Propositions \ref{prop:bw1} and \ref{prop:bw2} in \Sec{pathint} of this thesis. Here, we state the generalization to the replication problem for path-dependent contingent claims.
First, let us introduce the notation.
\begin{definition}\label{def:hedging_error}
The \emph{hedging error} of a self-financing trading strategy $(V_0,\phi)$ on $U\subset D([0,T],\R^d_+)$ for a path-dependent derivative with payoff $H$ in a scenario $\w\in U$ is the value $$V(T,\w;\phi)-H(\w)=V_0(\w)+G(T,\w;\phi)-H(\w).$$ $(V_0,\phi)$ is said to \emph{replicate} $H$ on $U$ if its hedging error for $H$ is null on $U$, while it is called a \emph{super-strategy} for $H$ on $U$ if its hedging error for $H$ is non-negative on $U$, i.e. $$V_0(\w)+ G(T,\w;\phi)\geq H(\w)\quad\forall\w\in U.$$ \end{definition}
For any \cadlag\ function with values in $\S^+(d)$, say $A\in D([0,T],\S^+(d))$, we denote by
$$Q_A(\Pi):=\left\{\w\in Q(\O,\Pi):\;[\w](t)=\int_0^tA(s)\mathrm{d} s\quad\forall t\in[0,T]\right\}$$ the set of functions with finite quadratic variation along $\Pi$ and whose quadratic variation is absolutely continuous with density $A$. Note that the elements of $Q_A(\Pi)$ are continuous, by \eq{qv-jumps}.
\begin{proposition}\label{prop:hedge} Consider a path-dependent contingent claim with exercise date $T$ and a continuous payoff functional $H:(\O,\norm{\cdot}_\infty)\mapsto\R$. Assume that there exists a \naf\ $F\in\Cloc(\W_T)\cap\CC^{0,0}(\W_T)$ that satisfies \begin{equation}\label{eq:fpde1} \left\{\bea{ll} \hd F(t,\w)+\frac12\tr\lf A(t)\cdot\vd^2F(t,\w)\rg=0,& t\in[0,T),\w\in Q_A(\Pi)\\ F(T,\w)=H(\w).& \end{array}\right. \end{equation} Let $\tilde A\in D([0,T],\S^+(d))$. Then, the hedging error of the trading strategy $(F(0,\cdot),\vd F)$, self-financing on $Q(\O^0,\Pi)$, for $H$ in any price scenario $\w\in Q_{\tilde A}(\Pi)$, is \begin{equation}\label{eq:err}
\frac12\int_{0}^T\tr\lf (A(t)-\tilde A(t))\vd^2F(t,\w)\rg \mathrm{d} t. \end{equation} In particular, the trading strategy $(F(0,\cdot),\vd F)$ replicates the contingent claim $H$ on $Q_A(\Pi)$ and its portfolio value at any time $t\in[0,T]$ is given by $F(t,\w_t)$. \end{proposition}
\begin{proof}
By \prop{G}, the gain at time $t\in[0,T]$ of the trading strategy $(F(0,\cdot),\vd F)$ in a price scenario $\w\in Q(\O^0,\Pi)$ is given by
$$G(t,\w;\vd F)=\int_0^t\vd F(u,\w_u)\cdot\mathrm{d}^{\Pi}\w(u).$$ Moreover, this strategy is self-financing on $Q(\O^0,\Pi)$, hence, by Remark \ref{rmk:path-sf}, its portfolio value at any time $t\in[0,T]$ in any scenario $\w\in Q(\O^0,\Pi)$ is given by $$V(t,\w)=F(0,\w_{0})+\int_0^t\vd F(u,\w_u)\cdot\mathrm{d}^{\Pi}\w.$$ In particular, since $F$ is smooth, we can apply the change of variable formula for functionals of continuous paths. This, by using the functional partial differential equation \eqref{eq:fpde1}, for all $\w\in Q_{\tilde A}(\Pi)$, gives \begin{align*} V(T,\w)={}&F(0,\w_{0})+\int_{0}^T\vd F(t,\w)\cdot\mathrm{d}^{\Pi}\w \\
={}&F(T,\w_T)-\int_{0}^T\hd F(t,\w)\mathrm{d} t-\frac12\int_{0}^T\tr\lf \tilde A(t)\vd^2F(t,\w)\rg \mathrm{d} t\\ ={}&H-\frac12\int_{0}^T\tr\lf(\tilde A(t)-A(t))\vd^2F(t,\w)\rg \mathrm{d} t. \end{align*} \end{proof}
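The formula \eq{err} lends itself to a direct numerical illustration in the simplest non-path-dependent case. The following Python sketch is purely illustrative (zero interest rates; a vanilla call; the delta hedge computed in a Black-Scholes model with volatility $\sigma$, applied along a path generated with volatility $\sigma_0$; all parameter values are arbitrary): the realized terminal profit and loss of the hedge is compared with the right-hand side of \eq{err}, which here reduces to $\frac12\int_0^T(\sigma^2-\sigma_0^2)S(t)^2\,\Gamma(t)\,\mathrm{d} t$, with $\Gamma$ the Black-Scholes gamma.
\begin{verbatim}
# Illustrative mis-specified delta hedge (assumptions: zero rates,
# vanilla call, hedge volatility sig, realized volatility sig0).
import numpy as np
from math import log, sqrt, exp, erf, pi

def N_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, tau, sig):           # price, delta, gamma (zero rates)
    d1 = (log(S / K) + 0.5 * sig ** 2 * tau) / (sig * sqrt(tau))
    pdf = exp(-0.5 * d1 ** 2) / sqrt(2.0 * pi)
    return (S * N_cdf(d1) - K * N_cdf(d1 - sig * sqrt(tau)),
            N_cdf(d1), pdf / (S * sig * sqrt(tau)))

rng = np.random.default_rng(6)
S0, K, T, sig, sig0, N = 100.0, 100.0, 1.0, 0.3, 0.2, 20_000
dt = T / N
logret = -0.5 * sig0 ** 2 * dt + sig0 * sqrt(dt) * rng.normal(size=N)
S = S0 * np.exp(np.concatenate([[0.0], np.cumsum(logret)]))

V, err_formula = bs_call(S0, K, T, sig)[0], 0.0
for i in range(N):
    _, delta, gamma = bs_call(S[i], K, T - i * dt, sig)
    V += delta * (S[i + 1] - S[i])     # self-financing hedge gain
    err_formula += 0.5 * (sig ** 2 - sig0 ** 2) * S[i] ** 2 * gamma * dt
print(V - max(S[-1] - K, 0.0), err_formula)   # close for fine grids
\end{verbatim}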
\section{Pathwise isometries and extension of the pathwise integral} \label{sec:isometry} \sectionmark{Pathwise isometries and extension of the pathwise integral}
We denote by $\mathring Q(\O,\Pi)$ the set of price paths $\w$ with non-trivial finite quadratic variation, that is $\w\in Q(\O,\Pi)$ such that $[\w](T)>0$. Then, given $\w\in \mathring Q(\O,\Pi)$, we consider the measure space $([0,T],\B([0,T]),\mathrm{d}[\w])$, where $\B([0,T])$ is the family of Borel sets of \OT\ and $\mathrm{d}[\w]$ denotes the finite measure on \OT\ associated with $[\w]$. Here, we define the space of measurable $\R^d$-valued functions on $[0,T]$ with finite second moment with respect to the measure $\mathrm{d}[\w]$, that is \begin{align*} \mathfrak L^2([0,T],[\w]):=\bigg\{&f:([0,T],\B([0,T]))\to\R^d\mbox{ measurable}:\\ &\int_0^T\pqv{f(t)\,^t\!f(t),\mathrm{d}[\w](t)} <\infty\bigg\}, \end{align*} where $\pqv{\cdot}$ denotes the Frobenius inner product, i.e. $\pqv{A,B}=\tr(^t\!AB)=\sum_{i,j}A_{i,j}B_{i,j}$. Then, consider the set \begin{align*} \mathfrak L^2(\FF,[\w]):=\big\{&\phi\;\R^d\mbox{-valued, progressively measurable process on }(\O,\F,\FF),\\ &\phi(\cdot,\w)\in\mathfrak L^2([0,T],[\w])\big\} \end{align*} and
we equip it with the following semi-norm: $$\norm{\phi}^2_{[\w],2}:=\int_0^T\pqv{\phi(t,\w)\,^t\!\phi(t,\w),\mathrm{d}[\w](t)},\quad \phi\in\mathfrak L^2(\FF,[\w]).$$
We also define the quotient of the space of real-valued paths with finite quadratic variation by its subspace of paths with zero quadratic variation: $$\bar Q(D([0,T],\R),\Pi):=Q(D([0,T],\R),\Pi)/ker([\cdot](T)),$$ where $ker([\cdot](T))=\{v\in Q(D([0,T],\R),\Pi):\;[v](T)=0\}$.
\begin{proposition}\label{prop:Iw}
For any price path $\w\in\mathring Q(\O,\Pi)$, let us define the pathwise integral operator \begin{eqnarray} \nonumber
I^\w:\left(\bar\Si(\Pi),\norm{\cdot}_{[\w],2}\right)\!\!\!\!&\to&\lf \bar Q(D([0,T],\R),\Pi),\sqrt{[\cdot](T)}\rg\\ \phi&\mapsto&\int\phi\cdot\mathrm{d}^\Pi\w, \label{eq:Iw1} \end{eqnarray} where $\bar\Si(\Pi):=\Si(\Pi)/ker(\norm{\cdot}_{[\w],2})$ and \begin{align*}
ker(\norm{\cdot}_{[\w],2})=\bigg\{&z=(z^1,\ldots,z^d)\in\mathfrak L^2(\FF,[\w]):\;\forall i,j=1,\ldots,d,\;\\ &[\w]_{i,j}\lf\{t\in[0,T]:\,z^i(t,\w)\neq0,\,z^j(t,\w)\neq0\}\rg=0\bigg\}. \end{align*}
$I^\w$ is an isometry between two normed spaces: \begin{equation} \forall\phi\in\bar\Si(\Pi),\quad\left[\int\phi\cdot\mathrm{d}^\Pi\w\right](T)=\int_0^T\pqv{\phi(t,\w)^t\!\phi(t,\w),\mathrm{d}[\w](t)}.\label{eq:iso} \end{equation} Moreover, $I^w$ admits a closure on $L^2(\FF,[\w]):=\mathfrak L^2(\FF,[\w])/ker(\norm{\cdot}_{[\w],2})$, that is the isometry \begin{equation}
\label{eq:Iw2}\bea{rcl}
\tilde I^\w:\lf L^2(\FF,[\w]),\norm{\cdot}_{[\w],2}\rg&\to&\lf\bar Q(D([0,T],\R),\Pi),\sqrt{[\cdot](T)}\rg,\\\phi&\mapsto&\int\phi\cdot\mathrm{d}^\Pi\w. \end{array} \end{equation}
\end{proposition}
\proof The space $\left(\mathfrak L^2(\FF,[\w]),\norm{\cdot}_{[\w],2}\right)$ is a semi-normed space and its quotient with respect to the kernel of $\norm{\cdot}_{[\w],2}$ is a normed space, which is also a Banach space by the Riesz-Fischer theorem. Moreover, for any $\phi\in\Si(\Pi)$, it holds \begin{align*} & \int_0^T\pqv{\phi(t,\w)^t\!\phi(t,\w),\mathrm{d}[\w](t)}\\ ={}& \sum_{i=1}^{m(n)}\tr\lf\phi(t^n_i,\w)^t\!\phi(t^n_i,\w)([\w](t^n_i)-[\w](t^n_{i-1}))\rg\\ ={}&\sum_{i=1}^{m(n)}\tr\lf\phi(t^n_i,\w)^t\!\phi(t^n_i,\w)\lim_{m\to\infty}\sum_{t^n_{i-1}<t^m_j\leq t^n_i}(\w(t^m_j)-\w(t^m_{j-1}))^t\!(\w(t^m_j)-\w(t^m_{j-1}))\rg\\ ={}&\lim_{m\to\infty}\sum_{t^m_j\in\pi^m}\tr\lf\phi(t^m_j,\w)^t\!\phi(t^m_j,\w)(\w(t^m_j)-\w(t^m_{j-1}))^t\!(\w(t^m_j)-\w(t^m_{j-1}))\rg\\ ={}&\lim_{m\to\infty}\sum_{t^m_j\in\pi^m}\lf\int_{t^m_{j-1}}^{t^m_j}\phi(\cdot,\w)\cdot\mathrm{d}^\Pi\w\rg^2\\ ={}&\left[\int\phi(\cdot,\w)\cdot\mathrm{d}^\Pi\w\right](T). \end{align*}
Finally, since $\lf \bar Q(D([0,T],\R),\Pi),\sqrt{[\cdot](T)}\rg$ is a Banach space and $\bar\Si(\Pi)$ is dense in $\lf L^2(\FF,[\w]),\norm{\cdot}_{[\w],2}\rg$, we can uniquely extend the isometry $I^\w$ in \eq{Iw1} to the isometry $\tilde I^\w$ in \eq{Iw2}. \endproof
\begin{remark}
For any $\w\in\mathring Q(\O,\Pi)$ and any $\phi\in L^2(\FF,[\w])$, the pathwise integral of $\phi$ with respect to $\w$ along $\Pi$ is given by a limit of Riemann sums:
\begin{equation} \int\phi\cdot\mathrm{d}^\Pi\w =\Limn \sum_{t^n_i\in\pi^n}\phi^n(t^n_i,\w)\cdot(\w(t^n_i)-\w(t^n_{i-1})), \end{equation} independently of the sequence $(\phi^n)_{n\geq1}\in\bar\Si(\Pi)$ such that $$\norm{\phi^n(\cdot,\w)-\phi(\cdot,\w)}_{[\w],2}\limn0.$$ \end{remark} Indeed, the definition of the isometry in \eq{Iw2} entails that, given $\phi(\cdot,\w)\in L^2(\FF,[\w])$, for any sequence $(\phi^n(\cdot,\w))_{n\geq1}\in\bar\Si(\Pi)$ such that $$\norm{\phi^n(\cdot,\w)-\phi(\cdot,\w)}_{[\w],2}\limn0,$$ then \begin{equation}\label{eq:limqv} \left[\sum_{t^n_i\in\pi^n}\phi^n(t^n_i,\w)\cdot(\w(t^n_i)-\w(t^n_{i-1})) - \int\phi\cdot\mathrm{d}^\Pi\w\right](T)\limn0. \end{equation} Since $\sqrt{[\cdot](T)}$ defines a norm on $ \bar Q(D([0,T],\R),\Pi)$, \eq{limqv} implies that the pathwise integral of $\phi$ with respect to $\w$ along $\Pi$ is a pointwise limit of Riemann sums: $$\int\phi\cdot\mathrm{d}^\Pi\w=\Limn\sum_{t^n_i\in\pi^n}\phi^n(t^n_i,\w)\cdot(\w(t^n_i)-\w(t^n_{i-1})),$$ independently of the chosen approximating sequence $(\phi^n)_{n\geq1}$.
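The isometry \eq{iso} can also be observed numerically. The following Python sketch is purely illustrative ($d=1$; a simulated Brownian trajectory stands in for $\w$; $\phi$ is a piecewise-constant integrand): the quadratic variation of $I=\int\phi\,\mathrm{d}^\Pi\w$, computed along coarser partitions, is compared with $\int_0^T\phi^2\,\mathrm{d}[\w]$ computed on the fine grid.
\begin{verbatim}
# Illustrative check of the isometry (assumptions: d = 1, simulated
# Brownian path w, piecewise-constant phi; d[w] is approximated by the
# squared increments of w on the fine grid).
import numpy as np

rng = np.random.default_rng(7)
N = 2 ** 20
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / N), N))])
phi = np.repeat(rng.normal(size=16), N // 16)   # constant on 16 blocks

I = np.concatenate([[0.0], np.cumsum(phi * np.diff(w))])  # I = int phi dw
rhs = np.sum(phi ** 2 * np.diff(w) ** 2)                  # int phi^2 d[w]
for n in [10, 14, 18, 20]:
    idx = np.arange(0, N + 1, N // 2 ** n)
    print(n, np.sum(np.diff(I[idx]) ** 2), rhs)   # [I](T) along pi^n vs rhs
\end{verbatim}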
\chapter{Pathwise Analysis of dynamic hedging strategies} \label{chap:robust} \chaptermark{Pathwise Analysis of dynamic hedging}
The issue of model uncertainty and its impact on the pricing and hedging of derivative securities has been the focus of substantial research in the quantitative finance literature (see e.g. \citet{avlevyparas,bickwill,cont2006,lyons}). Starting with Avellaneda et al.'s Uncertain Volatility Model \cite{avlevyparas}, the literature has focused on the analysis of the performance of pricing and hedging simple payoffs under model uncertainty. The dominant approach in this stream of literature has been to replace the assumption of a given, known probability measure by a family of probability measures which reflects model uncertainty, and to look for bounds on prices and performance measures for trading strategies using a worst-case analysis across the family of possible models.
A typical problem to consider is the hedging of a contingent claim. Consider a market participant who issues a contingent claim with payoff $H$ and maturity $T$ on some underlying asset. To price and hedge this claim, the issuer uses a pricing model (say, Black-Scholes), computes the price as
$$ V_t = E^{\mathbb{Q}}[H|{\cal F}_t]$$ and hedges the resulting profit and loss using the hedging strategy derived from the same model (say, the Black-Scholes delta hedge for $H$). However, the {\it true} dynamics of the underlying asset may, of course, be different from the assumed dynamics. Therefore, the hedger is interested in a few questions: How good is the result of the model-based hedging strategy in a realistic scenario? How \lq robust\rq\ is it to model mis-specification? How does the hedging error relate to model parameters and option characteristics? In 1998, \citet{elkaroui} provided an answer to these questions in the case of non-path-dependent options in the context of Markovian diffusion models, deriving an explicit formula for the profit and loss of the hedging strategy. \citet{elkaroui} showed that, when the underlying asset follows a Markovian diffusion $$\mathrm{d} S(t)= \mu(t)S(t)\mathrm{d} t+ S(t)\sigma_0(t,S(t)) \mathrm{d} W(t) \qquad \text{under}\ \mathbb{P}^0,$$
a hedging strategy computed in a (mis-specified) local volatility model with volatility $\sigma$: $$\mathrm{d} S(t)= r(t)S(t)\mathrm{d} t+ S(t)\sigma(t,S(t)) \mathrm{d} W(t) \qquad \text{under}\ \mathbb{Q}^\sigma$$
leads, under some technical conditions on $\sigma,\sigma_0$, to a P\&L equal, $\mathbb{P}^0$-almost surely, to \begin{equation}\label{eq:elk} \int_0^T \frac{\sigma^{2}(t,S(t))-\sigma_0^2(t,S(t))}{2}S(t)^2e^{\int_t^T r(s)\mathrm{d} s}\overbrace{\partial^2_{xx}f(t,S(t))}^{\Gamma(t)}\mathrm{d} t. \end{equation} This fundamental result, called by Mark Davis \lq the most important equation in option pricing theory\rq\ \cite{davis}, shows that the exposure of a mis-specified delta hedge over a short time period is proportional to the Gamma of the option times the specification error measured in quadratic variation terms.
In this chapter, we contribute to this line of analysis by developing a general framework for analyzing the performance and robustness of delta hedging strategies for path-dependent derivatives across a given set of scenarios. Our approach is based on the pathwise financial framework introduced in \chap{path-trading}, which builds on the non-anticipative functional calculus developed in \cite{contf2010}, an extension of F\"ollmer's pathwise approach to \ito\ calculus \cite{follmer} to a functional setting. Our setting allows for general path-dependent payoffs and does not require any probabilistic assumption on the dynamics of the underlying asset, thereby extending previous results on the robustness of hedging strategies in the setting of diffusion models to a much more general setting, closer to the scenario analysis approach used by risk managers. We obtain a pathwise formula for the hedging error of a general path-dependent derivative and provide sufficient conditions ensuring the robustness of the delta hedge. Under the same conditions, we show that discontinuities in the underlying asset always deteriorate the hedging performance. We show in particular that robust hedges may be obtained in a large class of continuous exponential martingale models under a vertical convexity condition on the payoff functional. We apply these results to hedging strategies for Asian options and barrier options, both in the Black-Scholes model with time-dependent volatility and in a model with path-dependent characteristics, the Hobson-Rogers model \cite{hobson-rogers}.
\section{Robustness of hedging under model uncertainty: a survey}
\subsection{Hedging under uncertain volatility}
Two fundamental references in the literature on model uncertainty are \citet{avlevyparas} and \citet{lyons}. \citet{avlevyparas} proposed a novel approach to pricing and hedging under \lq volatility risk\rq: the \emph{Uncertain Volatility Model}. Instead of looking for the most accurate model (in terms of forward volatility of asset prices), they work under the assumption that the volatility is bounded between two extreme values. In particular, they assume that future stock prices are \ito\ processes \begin{equation}\label{eq:unvol} \mathrm{d} S(t)=S(t)\lf\s(t)\mathrm{d} W(t)+\mu(t)\mathrm{d} t\rg, \end{equation} where $\mu,\s$ are adapted processes such that $\s_{\min}\leq\s\leq\s_{\max}$ and $W$ is a standard Brownian motion. The problem under consideration is the pricing and hedging of a derivative security paying a stream of cash-flows at $N$ future dates: $f_1(S(t_1)),\ldots,f_N(S(t_N))$, where the $f_j$ are known functions. Denoting by $\P$ the class of probability measures on the set of paths under which the coordinate process $S$ has dynamics \eq{unvol} for some $\s$ between the bounds, in the absence of arbitrage opportunities it is possible to construct an optimal (in the sense that the initial cost is minimal) self-financing portfolio that hedges a short position in the derivative and has a non-negative value after paying out all the cash flows. This optimal portfolio consists of an initial capital $p^+(t,S(t))$ and a risky position $\partial_{S}p^+(t,S(t))$, where $p^+(t,S(t))=\sup_{\PP\in\P}\EE^\PP\left[\sum_{j=1}^Ne^{-r(t_j-t)}f_j(S(t_j))\right]$ is obtained by solving the Black-Scholes-Barenblatt equation \begin{align*} \partial_{t}p^+(t,S(t))+\frac12S(t)^2\s^*\lf\partial_{SS}p^+(t,S(t))\rg^2\partial_{SS}p^+(t,S(t))\\ =-\sum_{k=1}^{N-1}f_k(S(t))\d_{t_k}(t),\quad t<t_N, \end{align*} with final condition $p^+(t_N,s)=f_N(s)$, where the function $\s^*$ is defined as $\s^*(s)=\s_{\min}\ind_{(-\infty,0)}(s)+\s_{\max}\ind_{[0,\infty)}(s)$.
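For illustration only, the following Python sketch (ours; grid, payoff and volatility bounds are ad hoc, with $r=0$ and a single cash-flow date) implements an explicit backward finite-difference scheme for the Black-Scholes-Barenblatt equation, selecting at each node the worst-case volatility according to the sign of the discrete Gamma.
\begin{verbatim}
import numpy as np

# Explicit backward scheme for the Black-Scholes-Barenblatt equation with a
# single payoff at maturity and r = 0; all parameters are illustrative.
sig_min, sig_max = 0.1, 0.3
T, M, N = 1.0, 200, 20_000
S = np.linspace(1e-3, 300.0, M)
dS, dt = S[1] - S[0], T / N
# butterfly payoff: its convexity changes sign, so sigma* is non-trivial
p = (np.maximum(S - 90, 0) - 2 * np.maximum(S - 100, 0)
     + np.maximum(S - 110, 0))
for _ in range(N):                       # march backwards from t_N to 0
    gamma = np.zeros_like(p)
    gamma[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dS**2
    sig = np.where(gamma >= 0, sig_max, sig_min)   # sigma*(Gamma)
    p += 0.5 * dt * sig**2 * S**2 * gamma  # dp = -(1/2) S^2 sigma*^2 Gamma dt
print(np.interp(100.0, S, p))  # worst-case (seller's) price at S(0) = 100
\end{verbatim}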
On the other hand, \citet{lyons} analyzed the same problem as \citet{avlevyparas}, but used a pathwise approach, in view of F\"ollmer's formula \eq{follmer_ito}. The security process $S$ is multi-dimensional and the only assumption is that it has finite quadratic variation at any time $t\geq0$ along the sequence of dyadic partitions and that the quadratic variation function $A=\{A_{i,j}\}_{i,j\in I}$ is such that, for all $u\geq0$, $A(u)$ belongs to the set $$\bea{rl}O(\l,\L,K(u,S(u))):=&\!\!\!\left\{\g=\{\g_{i,j}\}_{i,j\in I}\text{ positive symmetric matrix, }\right.\\ &\left.\forall v\in\R^I_+,\; \l\,^t\!vK(u,S(u))v<^t\!v\g v<\L\,^t\!vK(u,S(u))v\right\}, \end{array}$$ where $\l\leq1,\L\geq1$ are given constants and $K$ is a reference model for the squared volatility of the security, e.g. $K_{i,j}(t,s)=\s_{i,j}(t,s)s_is_j$. The main result in \cite{lyons} states that there exists a hedging strategy with an initial investment $f(0,S(0))$ that replicates a derivative paying $F(\t,S(\t))$ at the first time $\t$ at which the security $(t,S(t))$ leaves a fixed smooth domain $U\subset\R\times\R^I_+$. Moreover, such a strategy returns at any time $t<T$ an excess stream of money equal to $$\int_0^t\frac12\lf\sum_{i,j\in I}(\tilde A_{i,j}(u,S(u))-A_{i,j}(u,S(u)))\partial_{s_i s_j}f\rg(u,S(u))\mathrm{d} u$$ and at time $T$ it delivers exactly $F(T,S(T))$. This is an application of the pathwise \ito\ formula proven by F\"ollmer and of PDE theory, which guarantees that under appropriate conditions on $K$ the Pucci maximal equation $$\bea{l}\sup_{a\in O(\l,\L,K(u,S(u)))}\lf\frac12\sum_{i,j\in I}a_{i,j}\partial_{s_i s_j}f\rg(u,s)+\partial_{u}f(u,s)=0,\quad(u,s)\in U,\\ f(u,s)=F(u,s),\quad (u,s)\in\partial_pU \end{array}$$ has a smooth solution $f$ which is also the solution of the linear equation $$\lf\frac12\sum_{i,j\in I}\tilde A_{i,j}\partial_{s_i s_j}f\rg(u,s)+\partial_{u}f(u,s)=0,\quad\tilde A_{i,j}\in O(\l,\L,K(u,s)).$$
In 1996, \citet{bergman} established properties of European option prices as functions of the model parameters in the case where the underlying asset follows a one-dimensional diffusion, or belongs to a certain restricted class of multi-dimensional diffusions or stochastic volatility models, using PDE methods. Their results have implications for the robustness analysis of pricing and hedging of derivatives. They assume absence of arbitrage opportunities, that the following stochastic differential equations are well defined in terms of path-by-path uniqueness of solutions, and that the parameters allow for the application of the Feynman-Kac theorem. In the one-dimensional case, they assume that the risk-neutral dynamics of the underlying asset process $S$ is \begin{equation}
\label{eq:1dim}
\mathrm{d} S(t)=S(t)r(t)\mathrm{d} t+S(t)\s(t,S(t))\mathrm{d} W(t), \end{equation} where $W$ is a standard Brownian motion. This dynamics satisfies the \textit{no-crossing} property, i.e. \begin{equation}
\label{eq:nocross} s_2\geq s_1\;\Rightarrow\;S^{t,s_2}(u)\geq S^{t,s_1}(u),\;\text{almost surely}, \forall u\geq t, \end{equation} where $S^{t,s}$ solves \eq{1dim} with $S^{t,s}(t)=s$. Indeed, fix a realization $W(\cdot,\w)$ of the Brownian motion in \eq{1dim} and the corresponding paths $S^{t,s_2}(\cdot,\w)$ and $S^{t,s_1}(\cdot,\w)$: if there exists a time $\bar u\geq t$ such that $S^{t,s_2}(\bar u,\w)=S^{t,s_1}(\bar u,\w)$, then the two paths coincide from $\bar u$ onwards, by the Markov property. This property allows a claim price to inherit monotonicity from the payoff. In the two-dimensional case, they assume that the risk-neutral dynamics is given by \begin{equation}
\label{eq:2dim} \left\{\bea{ll}
\mathrm{d} S(t)={}&S(t)r(t)\mathrm{d} t+S(t)\s(t,S(t),Y(t))\mathrm{d} W^1(t), \\
\mathrm{d} Y(t)={}&(\b(t,S(t),Y(t))-\l(t,S(t),Y(t)))\th(t,S(t),Y(t))\mathrm{d} t \\
&+\th(t,S(t),Y(t))\mathrm{d} W^2(t), \end{array}\right. \end{equation} where $W^1,W^2$ are standard Brownian motions with quadratic covariation $\mathrm{d}[W^1,W^2](t)=\rho(t,S(t),Y(t))\mathrm{d} t$. Although multi-dimensional diffusions do not in general exhibit a similar behavior, there are conditions under which the process $S$ solving \eq{2dim} satisfies the no-crossing property \eq{nocross} as well. A first important result concerns the inheritance of monotonicity by option prices and establishes bounds on the risky position of a delta-hedging portfolio. \begin{theorem}[Theorem 1 in \cite{bergman}] Let the payoff function $g$ be one-sided differentiable, where at each point $x$ we also allow either $g'(x-)=\pm\infty$ or $g'(x+)=\pm\infty$. Suppose that $S$ follows either the one-dimensional diffusion~\eq{1dim}, or the two-dimensional diffusion~\eq{2dim} with the additional property that the drift and diffusion parameters do not depend on $s$. Then $$\inf_x (\min\{g'(x-),g'(x+)\})\leq \partial_{s}v\leq\sup_x(\min\{g'(x-),g'(x+)\}),$$ uniformly in $s,t$, where $v$ is the value of the European claim with payoff $g$. \end{theorem} This follows directly from the no-crossing property and an application of the generalized intermediate value theorem of real analysis. A second important result proves the inheritance of convexity of the claim price from the payoff function, which was already known for proportional one-dimensional diffusions (the Black-Scholes setting). \begin{theorem}[Theorem 2 in \cite{bergman}]\label{th:bergman2}
Suppose that $S$ follows either the one-dimensional diffusion~\eq{1dim}, or the two-dimensional diffusion~\eq{2dim} with the additional property that the drift and diffusion parameters do not depend on $s$ and there exists a function $G:[0,\infty)^2\rightarrow\R$ such that $$G(t,y)=\s(t,s,y)\th(t,s,y)\rho(t,s,y).$$ Then, if the payoff function is convex (concave), the claim's value is a convex (concave) function of the current underlying price. \end{theorem} The proof proceeds by applying the Feynman-Kac theorem to write the claim value as the solution of a Cauchy problem with final datum given by the payoff function $g$; then, by taking the $s$-partial derivative of the PDE, we get a new Cauchy problem for $\partial_{s}v$ with final datum $g'$. It suffices to apply the Feynman-Kac theorem again, taking into account the hypotheses on the coefficients, to write $\partial_{s}v$ as an expectation of $g'$ composed with a new stochastic process which satisfies the no-crossing property. Finally, the no-crossing property gives the monotonicity of $\partial_{s}v$ and, equivalently, the convexity (concavity) of $v$ in the underlying asset price. A consequence of the previous results, in terms of robustness analysis of hedging strategies, is the extension of the comparative statics known in a Black-Scholes setting to one-dimensional diffusions. In particular, an ordering of the volatility functions is preserved in the claim value functions: \begin{theorem}[Theorem 6 in \cite{bergman}]
If $\s_1(t,s)\geq\s_2(t,s)$ for all $s,t$, with strict inequality in some region, then $v_1(t,s)\geq v_2(t,s)$ for all $s,t$. \end{theorem} This result is of special interest when one has deterministic bounds on the volatility and the claim to hedge is a plain vanilla option, e.g. a call option; in such a case it implies that both the call price and its Delta are bounded, respectively, by the corresponding Black-Scholes call prices and appropriate Black-Scholes Deltas. \begin{theorem}[Theorem 8 in \cite{bergman}] If for all $s,t$, $\ushort\s(t)\leq\s(t,s)\leq\bar\s(t)$, then, for all $s,t$, $$\bea{cc}c^{\mathrm{BS}(\ushort\s)}(t,s)\leq c(t,s)\leq c^{\mathrm{BS}(\bar\s)}(t,s),\\\partial_{s}c^{\mathrm{BS}(\bar\s)}(t,s'')\leq \partial_{s}c(t,s)\leq \partial_{s}c^{\mathrm{BS}(\bar\s)}(t,s'), \end{array}$$ where $s',s''$ solve respectively $$\bea{l}c^{\mathrm{BS}(\ushort\s)}(t,s)=c^{\mathrm{BS}(\bar\s)}(t,s'')+\partial_{s}c^{\mathrm{BS}(\bar\s)}(t,s'')(s-s''),\\c^{\mathrm{BS}(\ushort\s)}(t,s)=c^{\mathrm{BS}(\bar\s)}(t,s')-\partial_{s}c^{\mathrm{BS}(\bar\s)}(t,s')(s'-s). \end{array}$$ \end{theorem} The bounds on the Delta are an immediate consequence of the bounds on the call price and of the inherited convexity. When the values of $s$ and $c(t,s)$ are observed, these bounds can be tightened further. Finally, they remark that if one relaxes either the continuity or the Markov property in the one-dimensional case, or the restrictions on the two-dimensional diffusion, the no-crossing property need not hold, so call option prices may exhibit unexpected behaviors.
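The price bounds of Theorem 8 are easy to check numerically; the following Monte Carlo sketch (ours, with an ad-hoc local volatility function taking values in $[0.15,0.25]$) verifies that a local-vol call price lies between the Black-Scholes prices at the bounding volatilities.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Monte Carlo check of the price bounds: a local-vol call price lies between
# the Black-Scholes prices at the bounding volatilities (ad-hoc parameters).
rng = np.random.default_rng(2)
S0, K, T, lo, hi = 100.0, 100.0, 1.0, 0.15, 0.25
locvol = lambda t, s: 0.20 + 0.05 * np.sin(0.05 * s + 6.0 * t)  # in [lo, hi]

def bs_call(vol):
    d1 = (np.log(S0 / K) + 0.5 * vol**2 * T) / (vol * np.sqrt(T))
    return S0 * norm.cdf(d1) - K * norm.cdf(d1 - vol * np.sqrt(T))

n_paths, n_steps = 200_000, 250
dt = T / n_steps
S = np.full(n_paths, S0)
for i in range(n_steps):
    v = locvol(i * dt, S)
    S *= np.exp(v * np.sqrt(dt) * rng.standard_normal(n_paths)
                - 0.5 * v**2 * dt)
print(bs_call(lo), np.maximum(S - K, 0.0).mean(), bs_call(hi))  # increasing
\end{verbatim}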
In 1998, \citet{elkaroui} derived results analogous to those of \citet{bergman} for both European and American options in a one-dimensional diffusion setting, by an independent approach based on stochastic flows rather than PDEs. While completeness is not assumed, the market is equipped with the strongest form of no-arbitrage condition, namely discounted stock prices are martingales under the objective probability measure $\PP$. The stock price is assumed to follow \begin{equation}\label{eq:elk-dS}
\mathrm{d} S(t)=r(t)S(t)\mathrm{d} t+\s(t)S(t)\mathrm{d} W(t), \end{equation} where $W$ is a standard $(\Ft,\PP)$-Brownian motion, the interest rate $r$ is a deterministic function in $L^1([0,T],\mathrm{d} t)$ and the volatility process $\s$ is non-negative, $\Ft$-adapted, almost surely in $L^1([0,T],\mathrm{d} t)$ and such that the discounted stock price $$\frac{S(t)}{M(t)}=S(0)\exp\left(\int_0^t\s(u)\mathrm{d} W(u)-\frac12\int_0^t\s^2(u)\mathrm{d} u\right),\quad 0\leq t\leq T,$$ is a square-integrable martingale. A trading strategy, or \emph{portfolio process}, is defined as a bounded adapted process, while a \emph{payoff function} is defined as a convex function on $\R_+$ having bounded one-sided derivatives. Let $h$ be the payoff function of a European contingent claim, $\phi$ a portfolio process and $P$ an adapted process such that $P(T)=h(S(T))$ (called a \emph{price process}); the \emph{tracking error} associated with $(P,\phi)$ is then defined as $e:=V-P$, where $V$ is the value process of the self-financing portfolio with trading strategy $\phi$ and initial investment $V(0)=P(0)$. Then, $(P,\phi)$ is called a \begin{itemize} \item \emph{replicating strategy} if $\frac e M\equiv0$, in which case the hedger exactly replicates the option at maturity, i.e. $V(T)=h(S(T))$, and $P(0)=\EE^\PP\left[\frac{h(S(T))}{M(T)}\right]$ is an arbitrage price for the claim; \item\emph{super-strategy} if $\frac e M$ is non-decreasing, in which case the hedger super-replicates a short position in the claim at maturity, i.e. $V(T)\geq h(S(T))$, and $P(0)\geq\EE^\PP\left[\frac{h(S(T))}{M(T)}\right]$; \item \emph{sub-strategy} if $\frac e M$ is non-increasing, in which case the hedger super-replicates a long position in the claim and the above inequalities are reversed. \end{itemize} The main purpose of \cite{elkaroui} is to analyze the performance of a hedging portfolio derived from a model with mis-specified volatility. First, assuming completeness, they provide two counterexamples to the familiar properties of option prices when the volatility is allowed to be stochastic in a path-dependent manner. On the one hand, a volatility process depending on the initial stock price and the driving Brownian motion may cause the value of a European call to fail the monotonicity property, even if the volatility is non-decreasing in the initial stock price, as happens for \begin{equation}\label{eq:counterex1}
\s(t)=\ind_{\{W(t)<S(0)\}}\ind_{\{t\leq T_a\}},\quad a>0,\quad T_a:=\inf\{t\geq0,W(t)=a\}. \end{equation}
On the other hand, even when the underlying dynamics allows the claim value to preserve both monotonicity and convexity, it may happen that an ordering on volatilities is not passed on to the respective call values, e.g. \begin{equation}\label{eq:counterex2}
\s(t)\leq\hat\s(t):=\ind_{\{t\leq T_a\}}\quad\text{but}\quad v(x)>\hat v(x)=0\;\forall x\in(0,a). \end{equation} Given a mis-specified model \begin{equation}\label{eq:misS}
\mathrm{d} S_\g(t)=S_\g(t) r(t)\mathrm{d} t+S_\g(t)\g(t,S_\g(t))\mathrm{d} W(t), \end{equation} where the only source of randomness in the volatility is the dependence on the current stock price, the following theorem states the important property of propagation of convexity, also obtained by \citet{bergman} for one-dimensional diffusions, although the proof here follows a completely independent approach. \begin{theorem}[Theorem 5.2 in \cite{elkaroui}]\label{th:elkaroui1}
Suppose that $\g:[0,T]\times\R_+\rightarrow\R$ is continuous and bounded from above and $s\mapsto\partial_{s}(s\g(t,s))$ is Lipschitz-continuous and bounded in $\R_+$, uniformly in $t\in[0,T]$. Then, if $h$ is a payoff function, the mis-specified claim value $$v_\g(x)=\EE^\PP\left[h(S_\g(T))|S_\g(0)=x\right]$$ is a convex function of $x>0$. \end{theorem} Indeed, denoting by $S_\g^x$ the solution of \eq{misS} with initial value $S_\g^x(0)=x$ and applying the \ito\ formula to the process $\partial_{x}S_\g^x$, the discounted process $\z^x=\lf\frac{\partial_x S_\g^x(t)}{M(t)}\rg_{t\in[0,T]}$ turns out to be the exponential martingale of $(N(t))_{t\in[0,T]}$, $N(t)=\int_0^t\partial_{s}(S_\g^x(u)\g(u,S_\g^x(u)))\mathrm{d} W(u)$, i.e. $\z^x(t)=\exp\big\{N(t)-\frac12\pqv{N}(t)\big\}.$
Then, Girsanov's theorem says that the process $W^x$, defined by $W^x(t)=W(t)-\int_0^t\partial_{s}(S_\g^x(u)\g(u,S_\g^x(u)))\mathrm{d} u$, is a $\PP^x$-Brownian motion, where $\frac{\mathrm{d}\PP^x}{\mathrm{d}\PP}=\z^x(T)$. The idea now is to prove that $v_\g$ has increasing one-sided derivatives. In order to do that, the first step is to bound the incremental ratios $\frac{v_\g(y)-v_\g(x)}{y-x}$, for $y>x$, in such a way as to be able to apply a version of Fatou's lemma on both sides. This gives \begin{align*}\EE^{\PP^x}\left[h'(S_\g^x(T)+)\right]\leq{}&\liminf_{y\searrow x}\frac{v_\g(y)-v_\g(x)}{y-x}\\ \leq{}&\limsup_{y\searrow x}\frac{v_\g(y)-v_\g(x)}{y-x}\leq\EE^{\PP^x}\left[h'(S_\g^x(T)+)\right], \end{align*} and an analogous estimate holds for $y<x$, $y\nearrow x$, thus $$v_\g'(x\pm)=\EE^{\PP^x}\left[h'(S_\g^x(T)\pm)\right].$$ Let us note that these bounds rely on the same no-crossing property \eq{nocross} which is fundamental in \cite{bergman}. Lastly, to remove the dependence on $x$ of the expectation operators, they define a new process $\tilde S^x$, whose law under $\PP$ is the same as the law of $S_\g^x$ under $\PP^x$ and which still satisfies the no-crossing property, and rewrite $v_\g'(x\pm)=\EE^\PP\left[h'(\tilde S^x(T)\pm)\right]$. From the last argument it also follows that the one-sided derivatives of $v_\g$ obey the same bounds as those of $h$. Under additional requirements, \citet{elkaroui} proved a robustness principle similar to Theorem~\ref{th:bergman2}, while also providing the explicit formula for the tracking error, which is fundamental for monitoring hedging risks. \begin{theorem}\label{th:elkaroui2}
Under the assumptions of Theorem~\ref{th:elkaroui1}, let $r,\g$ be H\"older-continuous in their arguments. If
\begin{equation}\label{eq:vol-dom}
\s(t)\leq\g(t,S(t))\text{ for Lebesgue-almost all }t\in[0,T], \;\PP-a.s.,
\end{equation} then $(P_\g,\De_\g)$ is a super-strategy, where $P_\g(t):=v_\g(t,S(t))$ and $\De_\g(t):=\partial_{s}v_\g(t,S(t))$ for all $t\in[0,T]$. If the volatilities satisfy the reversed inequality in \eq{vol-dom}, then $(P_\g,\De_\g)$ is a sub-strategy. Moreover, the tracking error associated with $(P_\g,\De_\g)$ is \begin{equation}\label{eq:disc-e}
e_\g(t)=M(t)\frac12\int_0^t\lf\g^2(u,S(u))-\s^2(u)\rg S^2(u)\partial^2_{xx}v_\g(u,S(u))\frac{\mathrm{d} u}{M(u)}. \end{equation}
\end{theorem} Indeed, under the assumptions, the value function $v_\g$ defined by $$v_\g(t,x):=\EE\left[e^{-\int_t^Tr(u)\mathrm{d} u}h(S_\g^{t,x}(T))\right],\quad t\in[0,T],\;x>0,$$ where $S_\g^{t,x}$ is the solution of \eq{misS} with initial condition $S_\g^{t,x}(t)=x$, belongs to $\C^{1,2}([0,T)\times\R_+)\cap\C([0,T]\times\R_+)$ and satisfies the partial differential equation $L_\g v_\g=0$ on $[0,T)\times\R_+$, with the operator defined by \begin{equation}\label{eq:Lgamma}
L_\g f(t,x):=\partial_{t}f(t,x)+r(t)x\partial_{x}f(t,x)+\frac12\g^2(t,x)x^2\partial^2_{xx}f(t,x)-r(t)f(t,x). \end{equation}
Then, the value $V_{\De}$ of the self-financing portfolio $\De_\g$ will evolve according to $$\mathrm{d} V_\De(t)=r(t)V_\De(t)\mathrm{d} t+\De_\g(t)(\mathrm{d} S(t)-r(t)S(t)\mathrm{d} t),$$ whereas the price process is governed by \begin{eqnarray*} \mathrm{d} P_\g(t)&={}&r(t)P_\g(t)\mathrm{d} t+\De_\g(t)(\mathrm{d} S(t)-r(t)S(t)\mathrm{d} t)\\
&&+\frac12\lf\s^2(t)-\g^2(t,S(t))\rg S^2(t)\partial^2_{xx}v_\g(t,S(t))\mathrm{d} t. \end{eqnarray*} Finally, the convexity of $v_\g$ and the domination of the mis-specified volatility over the \lq true\rq\ one conclude the proof. Important remarks about weakening the assumption \eq{vol-dom} are reported in the appendix of \cite{elkaroui}. In any case, under the regularity requirements, equation \eq{disc-e} for the discounted tracking error remains true, independently of the domination of volatilities. If $\s,\g$ are both non-negative, square-integrable and deterministic functions of time, satisfying \begin{equation}
\label{eq:weak-dom}
\lf\int_t^T\s^2(u)\mathrm{d} u \rg^{\frac12}\leq\lf\int_t^T\g^2(u)\mathrm{d} u \rg^{\frac12},\quad\text{for all }0\leq t\leq T, \end{equation} then the mis-specified value of the claim still dominates the true price, but the mis-specified delta-hedging portfolio is not guaranteed to super-replicate the option at maturity, in the sense that the expected tracking error under the market probability measure can be negative.
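A concrete pair of (made-up) volatility curves for which the weak domination \eq{weak-dom} holds at every date, while the pointwise domination \eq{vol-dom} fails on an initial time interval, can be checked in a few lines:
\begin{verbatim}
import numpy as np

# A made-up pair of deterministic volatility curves: the integrated-variance
# (weak domination) condition holds at every t, while pointwise domination
# fails on [0, 0.5).
T = 1.0
t = np.linspace(0.0, T, 1001)
sigma = np.where(t < 0.5, 0.30, 0.10)   # "true" volatility
gamma = np.full_like(t, 0.25)           # mis-specified model volatility
tail_var = lambda v: np.cumsum((v**2)[::-1])[::-1] * (t[1] - t[0])
print(np.all(np.sqrt(tail_var(sigma)) <= np.sqrt(tail_var(gamma))))  # True
print(np.all(gamma >= sigma))                                        # False
\end{verbatim}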
In 1998, \citet{hobson} also addressed the monotonicity and super-replication properties of option prices under mis-specified models. The theorems presented in \cite{hobson} are similar to the results in \cite{bergman} and \cite{elkaroui}, but the author uses yet another approach, based on coupling techniques.
The setting is that of a continuous-time frictionless market with finite horizon $T$, where the interest rate is set to $r=0$ and the stock price process $S$ is a weak solution to the stochastic differential equation \begin{equation}\label{eq:hob-dS}
\mathrm{d} S(t)=S(t)\s(t)\mathrm{d} B(t),\quad S(0)=s_0, \end{equation} for some standard Brownian motion $B$ on a stochastic basis $(\O,\F,\PP)$ and an adapted volatility process $\s$. For the moment, completeness of the model is assumed, so that options prices are given by $\PP$-expectations of the respective claims at maturity. The first main theorem goes under the name of ``option price monotonicity''. \begin{theorem}\label{th:hob-mono}
Let $h$ be a convex function and consider two candidate models for \eq{hob-dS}, namely $\s(\cdot)=\tilde\s(\cdot,S(\cdot))$ or $\s(\cdot)=\hat\s(\cdot,S(\cdot))$, such that $\hat\s(t,s)\geq\tilde\s(t,s)$ for all $t\in[0,T]$, $s\in\R$. Then, the European option with payoff $h(S(T))$ has a higher value under the model with volatility $\hat\s$ than under the one with volatility $\tilde\s$. \end{theorem} The proof is based on the joint application of the Brownian representation of local martingales and a coupling argument. Precisely, fix a Brownian motion $W$ issued from $s_0$ and define, for each model, a strictly increasing process $\t$ as the solution, for almost all $\w\in\O$, of the ordinary differential equation $$\frac{\mathrm{d} \t(t;\w)}{\mathrm{d} t}=\frac1{W^2(t;\w)\s^2(\t(t;\w),W(t;\w))},\quad t\in[0,T].$$ Then, define $A(\cdot;\w)$ as the inverse of $\t(\cdot;\w)$ and consider the process $P=W(A)$ (again, one for each model). This is a local martingale whose quadratic variation has time-derivative given by $$\partial_t A(t)=W^2(A(t))\s^2(\t(A(t)),W(A(t)))=P^2(t)\s^2(t,P(t)).$$ Thus, $P$ is a weak solution to the SDE $\mathrm{d} P(t)=P(t)\s(t,P(t))\mathrm{d} B(t)$ for some Brownian motion $B$. By this representation, $\hat A\geq\tilde A$ on $[0,T]$, almost surely. Indeed, at time 0, $\hat P(0)=\tilde P(0)=s_0$ and $\hat A(0)=\tilde A(0)=0$; afterward, if $\hat P(t)=\tilde P(t)$ then $\mathrm{d}\hat A(t)\geq\mathrm{d}\tilde A(t)$, and if $\hat A(t)=\tilde A(t)$ then $\hat P(t)=\tilde P(t)$. Finally, by Jensen's inequality and the properties of Brownian motion, \begin{eqnarray*}
\EE[h(\hat P(T))]&={}&\EE\left[\Et{h(\hat P(T))}{\tilde A(T)}\right]\\ &\geq&\EE\left[h\lf\Et{\hat P(T)}{\tilde A(T)}\rg\right]\\ &={}&\EE\left[h\lf\Et{W(\tilde A(T))+(W(\hat A(T))-W(\tilde A(T)))}{\tilde A(T)}\rg\right]\\ &={}&\EE[h(\tilde P(T))]. \end{eqnarray*} Notice that Hobson's method allows one to generalize the statement of the theorem in two directions: \begin{itemize} \item it does not require the completeness assumption, which is used only in the last step of the proof, when pricing the European claim by taking the expectation under the risk-neutral probability $\PP$, and can be omitted provided there is an agreed pricing measure; \item it need not be restricted to diffusion models, as the same construction also applies to the case of a path-dependent volatility $\s(t)=\s(t,S_t)$, provided that $\t$ and its inverse can be defined and assuming that, for all $t\in[0,T],s\in\R$, \begin{equation}\label{eq:path-dom} \hat\s(t,\hat s_t)\geq\tilde\s(t,\tilde s_t)\quad\forall \hat s_t,\tilde s_t\in \{\{f(u\wedge t)\}_{u\in[0,T]},\; f(0)=s_0,\; f(t)=s\}. \end{equation} The apparent contradiction with the counterexample \eq{counterex2} in \cite{elkaroui} does not in fact arise. In \cite{elkaroui} the price process is defined as the strong solution of the SDE \eq{elk-dS}, so that the coupling argument could not be applied, whereas in \cite{hobson} it is a weak solution. In effect, what matters for the purpose of derivative pricing and hedging is the law of the price process, rather than its relation to a specific Brownian motion. \end{itemize}
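The time-change mechanism at the heart of the coupling can be mimicked numerically. The following sketch (ours, with constant illustrative volatilities and a crude Euler discretization of the ODE for $A$) drives both models with the same Brownian path and checks that the model with the larger volatility consumes Brownian time faster, i.e. $\hat A(T)\geq\tilde A(T)$:
\begin{verbatim}
import numpy as np

# Both price processes are time changes P = W(A) of the SAME Brownian path W
# started at s0, with dA/dt = P(t)^2 sigma^2(t, P(t)); a larger sigma yields
# a larger A, which is the key step of the coupling. Everything is ad hoc.
rng = np.random.default_rng(3)
s0, T, n = 1.0, 1.0, 10_000
dt = T / n
a_grid = np.linspace(0.0, 2.0, 200_001)        # "Brownian time" grid
da = a_grid[1] - a_grid[0]
W = s0 + np.concatenate(
    [[0.0], np.cumsum(np.sqrt(da) * rng.standard_normal(200_000))])

def A_of_T(sigma):
    A = 0.0
    for k in range(n):                          # Euler scheme for dA/dt
        P = np.interp(A, a_grid, W)
        A += dt * (P * sigma(k * dt, P)) ** 2
    return A

tilde = lambda t, s: 0.2
hat = lambda t, s: 0.3
print(A_of_T(tilde), A_of_T(hat))               # the second is larger
\end{verbatim}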
The second property of option prices addressed by \citeauthor{hobson} is the preservation of convexity from the payoff to the value function. This is then used to derive the so-called \lq super-replication property\rq. \begin{theorem}\label{th:hob-conv}
Suppose the asset price follows the complete diffusion model \eq{hob-dS}, where the volatility function has sufficient regularity to ensure that the solution is unique-in-law {\upshape(e.g. $s\mapsto s\s(t,s)$ Lipschitz)} and a true martingale {\upshape(e.g. $\s$ bounded)}. If $h$ is a convex payoff function, then the claim value at each time prior to maturity is convex in the current underlying price. \end{theorem} The coupling argument used here is the following. Take $0<z<y<x$ and define $X,Y,Z$ as the solutions to \eq{hob-dS} with respect to independent Brownian motions and starting points respectively $x,y,z$ at time 0. Denote the crossing times by $H_X:=\inf\{t\geq0,X(t)=Y(t)\}$ and $H_Y:=\inf\{t\geq0,Y(t)=Z(t)\}$, and set $\t:=H_X\wedge H_Y\wedge T$. Conditionally on $\{\t=H_X\}$ (respectively on $\{\t=H_Y\}$), $X(T)\stackrel d= Y(T)$ (respectively $Y(T)\stackrel d= Z(T)$), while on $\{\t=T\}$ we have $Z(T)<Y(T)<X(T)$. Thus, by using the identities in law and the convexity of $h$, \begin{align*} \EE[(X(T)-Z(T))h(Y(T))]\leq{}&\EE[(Y(T)-Z(T))h(X(T))]\\ &{}+\EE[(X(T)-Y(T))h(Z(T))]. \end{align*} Then, the independence of the driving Brownian motions gives $$(x-z)\EE[h(Y(T))]\leq(x-y)\EE[h(Z(T))]+(y-z)\EE[h(X(T))],$$ which is the convexity of the option price, by arbitrariness of the starting points.
It should be noted that this proof cannot be extended to non-diffusion models, where the identities in law can no longer be used.
The same property is also proved in \cite{bergman} and \cite{elkaroui}; however, both require more restrictive conditions, such as differentiability of the diffusion coefficient $s^2\s^2(t,s)$ and a bounded (possibly one-sided) derivative for $h$. In case $h$ has a derivative bounded by a constant $C$ on $[0,\infty)$, bounds on the option price and its spatial derivative at any time $t\in[0,T]$ follow directly:
$$h(0)-CS(t)\leq v(t,S(t))\leq h(0)+CS(t),\quad \left|\partial_{s}v(t,S(t))\right|\leq C.$$
In \cite{elkaroui}, the property of inherited convexity is used to prove robustness of a delta-hedging portfolio, according to their definition. \citeauthor{hobson} reproduces the same steps to prove the \lq super-replication property\rq, stated as follows. \begin{theorem}\label{th:hob-super}
Under the model assumption of Theorem~\ref{th:hob-conv}, assume also that option prices from the model are of class $\C^{1,2}([0,T]\times\R)$ (e.g. $\s>0$ and H\"older continuous). If the model volatility $\s$ dominates the true volatility $\hat\s$, i.e. $\s(t,s)\geq\hat\s(t,s)$ for all $t\in[0,T]$, $s\in\R$, and if the payoff function is convex, then pricing and hedging according to the model will super-replicate the option payout. \end{theorem}
In order to prove that the model price dominates the true price, the portfolio value process, in particular the stochastic integral $\int_0^\cdot\partial_{s}v(u,S(u))\mathrm{d} S(u)$, has to be a martingale. In the case of a payoff function with bounded derivative, this is achieved by assuming that $\EE\left[\lf\int_0^T S^2(u)\s^2(u,S(u))\mathrm{d} u\rg^{\frac12}\right]<\infty$, which makes $S$ itself a true martingale, even if not necessarily square-integrable.
\subsection{Robust hedging of discretely monitored options} \label{sec:ss}
More recently, \citet{ss} revisited the notion of robustness by considering the performance of a model-based hedging strategy when applied to the realized path of the underlying asset price, rather than to some supposedly \lq true\rq\ model, inspired by \follmer's pathwise \ito\ calculus. \citet{ss} studied the performance of delta hedging strategies for path-dependent, discretely monitored derivatives, obtained under a local volatility model.
The stock price process $S$ is assumed to follow a local volatility model where the volatility process is a deterministic function of time and the current stock price, \begin{equation}\label{eq:ss-dS}
\mathrm{d} S(t)=S(t)\s(t,S(t))\mathrm{d} W(t), \end{equation} where the local volatility function is assumed to satisfy the following regularity conditions. \begin{assumption}\label{ass:ss}
\begin{itemize}\item[]
\item $\s\in\C^1([0,T]\times\R_+,\R_+)$, bounded above and below away from 0;
\item $s\mapsto s\s(t,s)$ Lipschitz continuous, uniformly in $t\in[0,T]$.
\end{itemize} \end{assumption}
The derivatives considered here have a path-dependent claim of the form $H(S)=h(S(t_1),\ldots,S(t_n))$, where $0=t_0<t_1<\ldots<t_n\leq T$ and $h:[0,\infty)^n\rightarrow[0,\infty)$ is continuous and satisfies $h(x)\leq C(1+|x|^p)$ for all $x\in[0,\infty)^n$ and some $C,p\geq0$, in which case $h$ is referred to as a \emph{payoff function}.
Using the Markov property, the price at time $t\in[t_k,t_{k+1})$ is given by {\setlength\arraycolsep{2pt} \begin{eqnarray} \nonumber v(t,s_1,\ldots,s_k,s)&=&\EE[H(S)\mid S(t_1)=s_1,\ldots,S(t_k)=s_k,S(t)=s]\\ \label{eq:ss-v} &=&\EE[h(s_1,\ldots,s_k,S(t_{k+1}),\ldots,S(t_n))\mid S(t)=s]. \end{eqnarray}} We denote $\displaystyle v(t,x):=\sum_{k=0}^{n-1}\ind_{[t_k,t_{k+1})}(t) v(t,s_1,\ldots,s_k,s)$, where $x\in\C([0,T],\R_+)$ is a deterministic function matching the observed stock price path, i.e. $x(t_1)=s_1,\ldots,x(t_k)=s_k,x(t)=s$. It is also assumed that all observed price paths are continuous and have finite quadratic variation along a fixed sequence of time partitions $\{\pi^n\}_{n\geq1}$, $\pi^n=(t_i^n)_{i=0,\ldots,m(n)}$, $0=t_0^n<\ldots<t_{m(n)}^n=T$ for all $n\geq1$, with mesh going to 0. The following result establishes the regularity of the value function and makes use of \follmer's pathwise calculus presented in \chap{pfc}. \begin{proposition}\label{prop:hob-pde}
Let $h$ be a payoff function. Under Assumption~\ref{ass:ss}, the map $(t,s)\mapsto v(t,x)$ belongs to $\displaystyle \C^{1,2}\Big(\bigcup_{k=0}^{n-1} (t_k,t_{k+1})\times[0,\infty)\Big)\cap\C([0,T]\times[0,\infty))$ and satisfies the partial differential equation
\begin{equation}\label{eq:ss-pde}
\partial_{t}v(t,x)+\frac12\s^2(t,s)s^2\partial_{ss}v(t,x)=0,\quad t\in\bigcup_{k=0}^{n-1}(t_k,t_{k+1}),\;s\in[0,\infty).
\end{equation} Furthermore, the \follmer\ integral $\int_0^T\partial_{s}v(t,x)\mathrm{d} x(t)$ is well defined and the pathwise \ito\ formula holds: $$v(T,x)=v(0,x)+\int_0^T\partial_{s}v(t,x)\mathrm{d} x(t)+\frac12\int_0^T\partial_{ss}v(t,x)\mathrm{d}\pqv{x}(t)+\int_0^T\partial_{t}v(t,x)\mathrm{d} t.$$ \end{proposition} The regularity and the PDE characterization of the value function are proven by backward induction, using the following standard result for a European non-path-dependent option with payoff $h:[0,\infty)\rightarrow\R_+$: let $v(t,s):=\EE[h(S(T))\mid S(t)=s]$; then $v\in\C^{1,2}([0,T]\times(0,\infty))\cap\C([0,T]\times[0,\infty))$, satisfies a polynomial growth condition in $s$ uniformly in $t\in[0,T]$, and solves the Cauchy problem \eq{ss-pde} on $[0,T]\times(0,\infty)$. At step 1, for $t\in[t_{n-1},t_n)$, the problem reduces to the standard case. Then, at each step $k>1$, for $t\in[t_{n-k},t_{n-k+1})$, define the auxiliary function $$h_k(s)=\EE[h(s_1,\ldots,s_{n-k},s,S(t_{n-k+2}),\ldots,S(t_n))\mid S(t_{n-k+1})=s],$$ which is a payoff function such that $v(t,s_1,\ldots,s_{n-k},s)=\EE[h_k(S(t_{n-k+1}))\mid S(t)=s]$, and again the standard result applies.
Using the same notation as above for $x$ and $H$, \citeauthor{ss} defined the delta-hedging strategy for $H$ obtained from the model~\eq{ss-dS} to be \textit{robust} if, whenever the model volatility \textit{overestimates} the market volatility, i.e. $\int_r^t\s^2(u,x(u))x^2(u)\mathrm{d} u\geq \pqv{x}(t)-\pqv{x}(r)$ for all $0\leq r<t\leq T$, or equivalently $\s(t,x(t))\geq\sqrt{\z(t)}$ for Lebesgue-almost every $t\in[0,T]$, where $\pqv{x}(t)=\int_0^t\z(u)x^2(u)\mathrm{d} u$ and $\z\geq0$, it holds that
\begin{equation}
\label{eq:super}
v(0,x)+\int_{0}^T\partial_{s}v(u,x) \mathrm{d}^\Pi x(u) \geq H(x).
\end{equation}
They pointed out that, under the assumptions of Proposition~\ref{prop:hob-pde}, the positivity of the option Gamma leads to a robust delta-hedging strategy. An application of this first basic result is the generalized Black-Scholes model, where the value function of any convex payoff function is again convex and hence the corresponding delta hedge is robust. This follows directly from the fact that a geometric Brownian motion with time-dependent volatility is affine in its starting point and convexity is invariant under affine transformations.
However, in a general local volatility model, the convexity of a payoff function does not guarantee the robustness property. Indeed, the main theorem in \cite{ss} identifies sufficient conditions on the payoff function resulting in convexity of the value function and consequent robustness of the delta hedge. \begin{theorem}
If the payoff function $h$ is \emph{directionally convex}, i.e. for all $i=1,\ldots,n$ the map $x_i\mapsto h(x_1,\ldots,x_i,\ldots,x_n)$ is convex and has a right-derivative which is increasing with respect to any other component $j=1,\ldots,n$, then, for all $k=1,\ldots,n$ and for any $t\in[t_k,t_{k+1})$, the value function $(s_1,\ldots,s_k,s)\mapsto v(t,s_1,\ldots,s_k,s)$ is also directionally convex, hence convex in the last variable, and the delta-hedging strategy is robust. \end{theorem} The crucial step in the proof of the above theorem is the inherited directional convexity of a map of the form $$u(s_1,\ldots,s_n)=\EE[h(s_1,\ldots,s_{n-1},S(T))\mid S(t)=s_n],$$ which is proven by means of the notion of Wright convexity. Furthermore, given a directionally convex function of $k+1$ arguments $u(s_1,\ldots,s_{k+1})$, the contraction $\tilde u(s_1,\ldots,s_{k})=u(s_1,\ldots,s_k,s_k)$ is also directionally convex. By this remark, the proof ends by induction on $k=0,\ldots,n$, noticing that for $t\in[t_{n-k},t_{n-k+1})$ the value function can be written as $$v(t,s_1,\ldots,s_{n-k},s)=\EE[v(t_{n-k+1},s_1,\ldots,s_{n-k},S(t_{n-k+1}),S(t_{n-k+1}))\mid S(t)=s].$$
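As a simple example (ours, not from \cite{ss}), the discretely monitored Asian call $h(x_1,\ldots,x_n)=\lf\frac1n\sum_{i=1}^nx_i-K\rg^+$ is directionally convex: each partial map $x_i\mapsto h(x)$ is convex, and its right-derivative $$\partial_{x_i}^+h(x)=\frac1n\ind_{\left\{\frac1n\sum_{j=1}^nx_j\geq K\right\}}$$ is non-decreasing in every other component; hence, by the theorem above, the corresponding delta hedge is robust whenever the model volatility overestimates the realized one.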
A counterexample, consisting of a local volatility model in which the delta hedge fails to be robust for every convex, positively homogeneous payoff which is not linear, implies that every payoff function that is both positively homogeneous and directionally convex must be linear.
The results obtained in \cite{ss} in the context of robustness of hedging strategies are specific to one-dimensional local volatility models. In more general models, the issue of propagation of convexity is quite intricate: in multivariate local volatility models, the convexity of prices of European options depends on the volatility matrix and value functions of European call options may fail to be convex.
\section{Robustness and the hedging error formula} \label{sec:path-robust} In this thesis, we consider the following problem: a market participant sells a path-dependent derivative with maturity $T$ and payoff functional $H$, and uses a pricing model of his choice to compute the price of this derivative and the corresponding hedging strategy.
This situation is typical of financial institutions issuing derivatives and subject to risk management constraints. The behavior of the underlying asset during the lifetime of the derivative may or may not correspond to a typical trajectory of the model used by the issuer for constructing the hedging strategy. More importantly, the hedger only experiences a single path for the underlying, so it is not even clear what it means to assess whether the model correctly describes the risk along this path. The relevant question for the hedger is to assess, ex-post, the performance of the hedging strategy in the realized scenario and to quantify, ex-ante, the magnitude of possible losses across different plausible risk scenarios. This calls for a scenario analysis, or pathwise analysis, of the performance of such hedging strategies. In fact, such scenario analyses, or stress tests, of hedging strategies are routinely performed in financial institutions using simulation methods, but a theoretical framework for such a pathwise analysis was missing.
In the general case, where either the payoff or the volatility is path-dependent, the value at time $t$ of the claim will be a non-anticipative functional of the path of the underlying asset.
In this chapter, we keep to the one-dimensional case and work on the canonical space of continuous paths $(\O,\F,\FF)$, where $\O:=C([0,T],\R_+)$, $\F$ is the Borel sigma-field and $\FF=\Ft$ is the natural filtration of the coordinate process $S$, given by $S(u,\w)=\w(u)$ for all $\w\in\O$, $u\in[0,T]$. The coordinate process $S$ represents the asset price process and we assume that the hedger's model consists of a square-integrable martingale measure for $S$: \begin{assumption}\label{ass:S}
The market participant prices and hedges derivative instruments assuming that the underlying asset price $S$ evolves according to $\mathrm{d} S(t)=\s(t) S(t) \mathrm{d} W(t)$, i.e. \begin{equation}
\label{eq:S}
S(t)=S(0)e^{\int_0^t\s(u)\mathrm{d} W(u)-\frac12\int_0^t\s(u)^2\mathrm{d} u},\,t\in[0,T], \end{equation} where $W$ is a standard Brownian motion on $(\O,\F,\FF,\PP)$ and the volatility $\s$ is a non-negative $\FF$-adapted process such that $S$ is a square-integrable $\PP$-martingale. \end{assumption} This assumption includes the majority of models commonly used for pricing and hedging derivatives. The assumption of square-integrability is not essential and may be removed by localization arguments but we will retain it to simplify some arguments. Note that this is an assumption on the pricing model used by the hedger, not an assumption on the evolution of the underlying asset itself. We will not make any assumption on the process generating the dynamics of the underlying asset.
\begin{assumption}\label{ass:H}
Let $H:D([0,T],\R)\mapsto\R$ be the payoff of a path-dependent derivative with maturity $T$, such that $\EE^{\PP}[|H(S_T)|^2]<\infty$. \end{assumption}
Under Assumptions \ref{ass:S} and \ref{ass:H}, the replicating portfolio for $H$ is given by the delta-hedging strategy $(Y(0),\nabla_SY)$, where $Y(t)=\EE^{\PP}[H(S_T)|\F_t]$ denotes the price process of the claim, and its value process coincides with $Y$.
We denote by \begin{equation} \label{eq:suppS}
\supp(S,\PP):=\big\{\w\in\O:\;\PP(S_T\in V)>0\;\forall \text{neighborhood $V$ of }\w\text{ in }\lf\O,\norm{\cdot}_\infty\rg\big\}, \end{equation}
the \emph{topological support of $(S,\PP)$ in $(\O,\norm{\cdot}_\infty)$}, that is, the smallest closed set in $(\O,\norm{\cdot}_\infty)$ containing $S_T$ with $\PP$-measure one. Since $S$ may not have full support in $(\O,\norm{\cdot}_\infty)$, we will need to work specifically on the support of $S$ in order to pass from equations that hold $\PP$-almost surely for functionals of the price process $S$ to pathwise equations for functionals defined on the space of stopped paths.
Throughout this chapter, we consider a fixed sequence of partitions $\Pi=(\pi^n)_{n\geq1}$, $\pi^n=\{0=t^n_0<t^n_1<\ldots,t^n_{m(n)}=T\}$, with mesh going to 0 as $n$ goes to $\infty$. For paths of absolutely continuous finite quadratic variation along $\Pi$, we define the \emph{local realized volatility} as $$\s^{\mathrm{mkt}}:[0,T]\times\mathcal A\to\R,\quad (t,\w)\mapsto\s^{\mathrm{mkt}}(t,\w)=\frac1{\w(t)}\sqrt{\frac{\mathrm{d}}{\mathrm{d} t}[\w](t)},$$ where $$\mathcal A:=\{\w\in Q(\O,\Pi),\;t\mapsto[\w](t)\text{ is absolutely continuous}\}.$$
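In discrete time, $\s^{\mathrm{mkt}}$ can be approximated from a sampled path by differencing the cumulative sum of squared increments; the following sketch (ours, with an arbitrary smoothing window) is one such estimator:
\begin{verbatim}
import numpy as np

# Discrete analogue of the local realized volatility sigma_mkt: the quadratic
# variation is approximated by cumulative squared increments and its time
# derivative by a difference quotient over a sliding window (window is ad hoc).
def realized_local_vol(path, times, window=20):
    path, times = np.asarray(path), np.asarray(times)
    qv = np.concatenate([[0.0], np.cumsum(np.diff(path) ** 2)])  # [omega](t_i)
    dqv = (qv[window:] - qv[:-window]) / (times[window:] - times[:-window])
    return np.sqrt(np.maximum(dqv, 0.0)) / path[: len(dqv)]      # sigma_mkt
\end{verbatim}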
Our main results apply to paths with finite quadratic variation along the given sequence $\Pi$ of partitions, as it is a necessary assumption in the theory of functional pathwise calculus. However, as remarked in Subsection \ref{sec:reasonable}, this assumption is also reasonable in terms of avoiding undesirable strategies that carry infinite gain with bounded initial capital on some paths.
If $Y\in\Cb(S)$, with $Y(t)=F(t,S_t)$ $\mathrm{d} t\times\mathrm{d}\PP$-almost surely, the universal hedging equation \eq{univ-hedge} holds and the asset position of the hedger's portfolio at almost any time $t\in[0,T]$ and for $\PP$-almost all scenarios $\w$ is given by $\nabla_SY(t,\w)=\vd F(t,\w)$. Note that, even though a \naf\ $F:\W_T\mapsto\R$ such that $Y(t)=F(t,S_t)$ for Lebesgue-almost all $t\in[0,T]$ and $\PP$-almost all $\w$ is not unique, the process $\nabla_SY(\cdot)=\vd F(\cdot,S_\cdot)$ does not depend on the choice of such a functional representation $F$ of $Y$, up to indistinguishable processes. Moreover, if $F$ also satisfies $F\in\CC^{0,0}(\W_T)$, according to \prop{G} the trading strategy $(F(0,\cdot),\vd F)$ is self-financing on $Q(\O,\Pi)$ and allows a path-by-path computation of the gain from trading as a \follmer\ integral.
We will therefore restrict ourselves to this class of pathwise trading strategies, which are of main interest: \begin{equation}\label{eq:nabla} \VV:=\{\vd F,\quad F\in\Cloc(\W_T)\cap\CC^{0,0}(\W_T)\}. \end{equation} Note that $\VV$ has a natural vector space structure; we call its elements \emph{vertical 1-forms}.
In line with \rmk{path-sf}, the portfolio value of a self-financing trading strategy $(V_0,\phi)$ with asset position given by a vertical 1-form $\phi=\vd F$ and initial investment $V_0=F(0,\cdot)$ is given, at any time $t\in[0,T]$ and in any scenario $\w\in Q(\O,\Pi)$, by $$V(t,\w)=F(0,\w)+\int_{0}^t\vd F(u,\w)\mathrm{d}^\Pi\w(u).$$ The portfolio value functional $V(T,\cdot)$ at the maturity date can differ from the payoff $H$ on a set of strictly positive $\PP$-measure. What matters about this mis-replication is the sign of the difference between the portfolio value at maturity and the payoff in a given scenario.
By the arguments above and recalling \defin{hedging_error}, we remark that
the hedging error of a trading strategy $(V_0,\phi)$, with $\phi\in\VV$, for a derivative with payoff $H$ and in a scenario $\w\in Q(\O,\Pi)$, is given by $$V(T,\w)-H(\w_T)=V_0(\w)+\int_{0}^T\phi(u,\w)\mathrm{d}^\Pi\w(u)-H(\w_T).$$ Moreover, $(V_0,\phi)$ is a super-strategy for $H$ on $U\subset Q(\O,\Pi)$ if $$V_0(\w)+\int_{0}^T\phi(u,\w) \mathrm{d}^\Pi\w(u) \geq H(\w_T)\quad\forall\w\in U.$$
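In discrete form, the hedging error is computed scenario by scenario as a left-point Riemann sum approximating the \follmer\ integral; a minimal sketch (ours, with the asset position given as a callable acting on the path observed so far):
\begin{verbatim}
import numpy as np

# Discrete hedging error along one scenario: V0 + left-point Riemann sum of
# the gains minus the payoff; phi is any non-anticipative position functional.
def hedging_error(phi, path, times, V0, payoff):
    gains = sum(phi(times[i], path[: i + 1]) * (path[i + 1] - path[i])
                for i in range(len(path) - 1))
    return V0 + gains - payoff(path)

# Example (illustrative): a constant position of 1 share on a toy path
err = hedging_error(lambda t, past: 1.0,
                    path=np.array([100.0, 101.0, 99.5, 102.0]),
                    times=np.array([0.0, 0.25, 0.5, 0.75]),
                    V0=2.0, payoff=lambda p: max(p[-1] - 100.0, 0.0))
print(err)  # 2.0 + (102 - 100) - 2.0 = 2.0
\end{verbatim}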
\begin{definition}\label{def:rob}
Given $F\in\Cloc(\W_T)\cap\CC^{0,0}(\W_T)$ such that $Y(t)=F(t,S_t)$ $\mathrm{d} t\times\mathrm{d}\PP$-almost surely, the delta-hedging strategy $(Y(0),\nabla_S Y)$ for $H$ is said to be \emph{robust} on $U\subset Q(\O,\Pi)$ if $(F(0,\cdot),\vd F)$ is a \emph{super-strategy} for $H$ on $U$. \end{definition}
\begin{proposition}[Pathwise hedging error formula]\label{prop:robust}
If there exists a \naf\ $F:\L_T\to\R$ such that
\begin{align} &F\in\Cb(\W_T)\cap\CC^{0,0}(\W_T),\quad \hd F\in\CC^{0,0}_l(\W_T),&\label{eq:regF}\\
&F(t,S_t)=\EE^{\PP}[H(S_T)|\F_t]\quad \mathrm{d} t\times \mathrm{d}\PP\text{-a.s.} \label{eq:valueF}
\end{align} then the hedging error of the delta hedge $(F(0,\cdot),\vd F)$ along any path $\w\in Q(\O,\Pi)\cap\supp(S,\PP)$ is explicitly given by
\begin{align*}
&\!\!\!\!\!\!V_0(\w)+\int_{0}^T\vd F(u,\w) \mathrm{d}^\Pi\w(u)-H(\w_T)\\
={}&\frac12\int_{0}^T\s(t,\w)^2\w^2(t)\vd^2F(t,\w)\mathrm{d} t-\frac12\int_{0}^T\vd^2F(t,\w)\mathrm{d}[\w](t). \end{align*} In particular, if $\w\in\A\cap\supp(S,\PP)$, then
\begin{align}
&\!\!\!\!\!\!V_0(\w)+\int_{0}^T\vd F(u,\w) \mathrm{d}^\Pi\w(u)-H(\w_T)\nonumber\\
={}& \frac12\int_{0}^T \lf\s(t,\w)^2-\s^{\mathrm{mkt}}(t,\w)^2\rg\w^2(t)\vd^2F(t,\w) \mathrm{d} t. \label{eq:tr_err}
\end{align} Furthermore, if for all $\w\in U\subset(\A\cap\supp(S,\PP))$ and Lebesgue-almost every $t\in[0,T)$, \begin{equation}\label{eq:supervol} \vd^2F(t,\w)\geq0\text{ (resp. $\leq$)}\quad\text{and}\quad\s(t,\w)\geq\s^{\mathrm{mkt}}(t,\w)\text{ (resp. $\leq$)}, \end{equation} then the delta hedge for $H$ is robust on $U$. \end{proposition} \proof Assumptions \eq{regF}-\eq{valueF} imply $Y\in\Cb(S)$, with $Y(t)=F(t,S_t)$ $\mathrm{d} t\times\mathrm{d}\PP$-almost surely, thus $F(\cdot,S_\cdot)$ satisfies the functional \ito\ formula for functionals of continuous semimartingales \eq{fif-csm}.
Moreover, by \prop{universalprice}, the universal pricing equation holds: for all $\w\in\supp(S,\PP)$, \begin{equation}\label{eq:fpde} \hd F(t,\w)+\frac12\vd^2F(t,\w)\s^2(t,\w)\w^2(t)=0\quad \forall t\in[0,T). \end{equation}
By \prop{G} and using the pathwise change of variable formula for functionals of continuous paths (\thm{fif-c}), the value of the hedger's portfolio at maturity is given by, for all $\w\in Q(\O,\Pi)$, \begin{align} V(T,\w)={}&F(0,\w_{0})+\int_{0}^T\vd F(t,\w) \mathrm{d}^\Pi\w(t)\nonumber\\ ={}& H-\int_{0}^T\hd F(t,\w)\mathrm{d} t-\frac12\int_{0}^T\vd^2F(t,\w) \mathrm{d}[\w](t).\label{eq:V} \end{align} Then, using the equations \eq{V} and \eq{fpde}, we get an explicit expression for the hedging error along any path $\w$ in $\A\cap\supp(S,\PP)$: \begin{align*} V(T,\w)-H ={}& -\int_{0}^T\hd F(u,\w)\mathrm{d} u-\frac12\int_{0}^T\vd^2F(u,\w)\,\mathrm{d}[\w](u) \\ ={}& \frac12\int_{0}^T{\s}(u,\w)^2\w^2(u)\vd^2F(u,\w)\mathrm{d} u-\frac12\int_{0}^T\s^{\mathrm{mkt}}(u,\w)^2\w^2(u)\vd^2F(u,\w)\mathrm{d} u \\ ={}& \frac12\int_{0}^T \lf{\s}(u,\w)^2-\s^{\mathrm{mkt}}(u,\w)^2\rg\w^2(u)\vd^2F(u,\w) \mathrm{d} u, \end{align*} where the second equality uses \eq{fpde} and $\mathrm{d}[\w](u)=\s^{\mathrm{mkt}}(u,\w)^2\w^2(u)\mathrm{d} u$. Moreover, the inequalities \eq{supervol} imply that, for all $\w\in U$, \begin{align*} V(T,\w)\geq{}& H-\int_{0}^T\hd F(t,\w)\mathrm{d} t-\frac12\int_{0}^T\s(t,\w)^2\w^2(t)\vd^2F(t,\w)\mathrm{d} t\\ ={}&H. \end{align*} This proves the robustness of the delta hedge on $U$. \endproof \begin{remark}
\prop{robust} simply requires the price trajectory to have an absolutely continuous quadratic variation in a pathwise sense, but does not assume any specific probabilistic model. Nevertheless, it applies to any model whose sample paths fulfill these properties almost surely: this is the case in particular for diffusion models and the other models based on continuous semimartingales analyzed in \cite{avlevyparas,bergman,elkaroui,hobson}. However, note that we do not even require the price process to be a semimartingale. For example, our results also hold when the price paths are generated by a (functional of a) fractional Brownian motion with index $H\geq\frac12$. \end{remark}
\section{The impact of jumps} \label{sec:jumps}
The presence of jumps in the price trajectory affects the hedging error of the delta-hedging strategy in an unfavorable way.
\begin{proposition}[Impact of jumps on delta hedging]\label{prop:jumps}
If there exists a \naf\ $F:\L_T\to\R$ such that
\begin{align*} &F\in\Cb(\L_T)\cap\CC^{0,0}(\L_T),\quad \vd F\in\CC^{0,0}(\L_T),\quad \hd F\in\CC^{0,0}_l(\W_T),\\
&F(t,S_t)=\EE^{\PP}[H(S_T)|\F_t]\quad \mathrm{d} t\times \mathrm{d}\PP\text{-a.s.}
\end{align*}
then, for any $\w\in Q(D([0,T],\mathbb{R}_+),\Pi)\cap\supp(S,\PP)$ such that $[\w]^c$ is absolutely continuous, the hedging error of the delta hedge $(F(0,\cdot),\vd F)$ for $H$ is explicitly given by
\begin{align} \label{eq:tr_err-jumps}
&\frac12\int_{0}^T \lf\s(t,\w)^2-\s^{\mathrm{mkt}}(t,\w)^2\rg\w^2(t)\vd^2F(t,\w) \mathrm{d} t\\ &-\sum_{t\in(0,T]}\lf F(t,\w_t)-F(t,\w_{t-})-\vd F(t,\w_{t-})\cdot\De\w(t)\rg.
\end{align} \end{proposition} \proof We follow the same steps as in the proof of \prop{robust}, with the appropriate modifications. The universal pricing equation holds on the support of $S$, that is, for all $\w\in\supp(S,\PP)$, $$\hd F(t,\w)+\frac12\vd^2F(t,\w)\s^2(t,\w)\w^2(t)=0\text{ for Lebesgue-a.e. }t\in[0,T).$$ By \prop{G-cadlag} and using the pathwise change of variable formula for functionals of \cadlag\ paths (\thm{fif-d}), the value of the hedger's portfolio at maturity in the scenario $\w$ is given by \begin{align} V(T,\w)={}&F(0,\w_{0})+\int_{0}^T\vd F(t,\w) \mathrm{d}^\Pi\w(t)\nonumber\\ ={}& H-\int_{0}^T\hd F(t,\w)\mathrm{d} t-\frac12\int_{0}^T\vd^2F(t,\w) \mathrm{d}[\w]^c(t)\label{eq:V1}\\ &-\sum_{t\in(0,T]}\lf F(t,\w_t)-F(t,\w_{t-})-\vd F(t,\w_{t-})\cdot\De\w(t)\rg.\label{eq:V2} \end{align} Then, using the equations \eq{V1}, \eq{V2} and \eq{fpde}, we get an explicit expression for the hedging error in the scenario $\w$: \begin{align*} V(T,\w)-H ={}& \frac12\int_{0}^T \lf{\s}(u,\w)^2-\s^{\mathrm{mkt}}(u,\w)^2\rg\w^2(u)\vd^2F(u,\w) \mathrm{d} u\\ &-\sum_{t\in(0,T]}\lf F(t,\w_t)-F(t,\w_{t-})-\vd F(t,\w_{t-})\De\w(t)\rg. \end{align*}
\endproof
\begin{remark} Using a Taylor expansion of $e\mapsto F(t,\w_{t-}+e\ind_{[t,T]})$, we can rewrite the hedging error as \begin{align*} V(T,\w)-H ={}& \frac12\int_{0}^T \lf{\s}(u,\w)^2-\s^{\mathrm{mkt}}(u,\w)^2\rg\w^2(u)\vd^2F(u,\w) \mathrm{d} u\\ &{}-\frac12\sum_{t\in(0,T]} \vd^2F(t,\w_{t-}+\xi_t\ind_{[t,T]})\De\w(t)^2, \end{align*} for appropriate $\xi_t\in B(0,\abs{\De\w(t)})$. This shows that the exposure to jump risk is quantified by the Gamma of the option computed in a \lq jump scenario\rq, i.e. along a vertical perturbation of the original path. \end{remark}
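The sign of the jump contribution is easily visualized in the simplest case: for a vanilla call priced by the Black-Scholes formula (an illustration of ours, with arbitrary parameters), convexity makes the bracketed jump term positive, so it always enters the hedging error with a negative sign.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Jump term F(t, w_t) - F(t, w_{t-}) - dF * Delta_w for a vanilla call priced
# with the Black-Scholes formula: by convexity it is positive, so it enters
# the hedging error with a negative sign.
def bs_call(S, K=100.0, tau=0.5, vol=0.2):
    d1 = (np.log(S / K) + 0.5 * vol**2 * tau) / (vol * np.sqrt(tau))
    return S * norm.cdf(d1) - K * norm.cdf(d1 - vol * np.sqrt(tau))

S, dS = 100.0, -5.0                                     # a downward jump
delta = (bs_call(S + 1e-4) - bs_call(S - 1e-4)) / 2e-4  # finite-diff Delta
print(bs_call(S + dS) - bs_call(S) - delta * dS)        # > 0: hedger's loss
\end{verbatim}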
\section{Regularity of pricing functionals} \label{sec:exist}
\prop{robust} requires some regularity of the pricing functional $F$, which is in general defined as a conditional expectation; it is therefore not obvious how to verify such regularity for $F$ on the space of stopped paths. In \prop{exist}, we give sufficient conditions on the payoff functional which lead to a \emph{vertically smooth} pricing functional.
\begin{definition}\label{def:vsmooth}
A functional $h:D([0,T],\R)\mapsto\R$ is said to be \emph{vertically smooth on $U\subset D([0,T],\R)$} if, for all $(t,\w)\in[0,T]\times U$, the map \begin{eqnarray*}g^h(\cdot;t,\w):\R&\to&\R, \\ e&\mapsto &h\lf\w+e\ind_{[t,T]}\rg\end{eqnarray*}
is twice continuously differentiable on a neighborhood $V$ of $0$, and there exist $K,c,\b>0$ such that, for all $\w,\w'\in U$, $t,t'\in[0,T]$, $$\abs{\partial_ e g^h(e;t,\w)}+\abs{\partial_{ee}g^h(e;t,\w)}\leq K,\quad e\in V,$$ and \begin{equation}\label{eq:gh-lip} \bea{c} \abs{\partial_e g^h(0;t,\w)-\partial_e g^h(0;t',\w')}+\abs{\partial_e^2 g^h(0;t,\w)-\partial_e^2 g^h(0;t',\w')}\\ \leq c(\norm{\w-\w'}_\infty+\abs{t-t'}^\b). \end{array} \end{equation} \end{definition}
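For instance (an illustration of ours, not part of the original definition), if $h(\w_T)=f(\w(T))$ for some bounded $f\in\C^2(\R)$ with bounded, Lipschitz first and second derivatives, then $$g^h(e;t,\w)=f(\w(T)+e),\quad \partial_e g^h(e;t,\w)=f'(\w(T)+e),\quad \partial_{ee}g^h(e;t,\w)=f''(\w(T)+e),$$ so the uniform bounds and the H\"older condition \eq{gh-lip} are satisfied (for any $\b>0$, since $g^h$ does not depend on $t$).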
We define, for all $t\in[0,T]$, the concatenation operator $\conc{t}$ as $$\bea{rl} \conc{t}:&D([0,T],\R)\times D([0,T],\R)\rightarrow D([0,T],\R),\\ &(\w,\w')\mapsto\w\underset{t}{\oplus}\w'=\w\ind_{[0,t)}+\w'\ind_{[t,T]}. \end{array}$$ This will appear in the proof of Propositions \ref{prop:exist} and \ref{prop:convex}.
The following result shows how to construct a (vertically) smooth version of the conditional expectation that gives the price of a path-dependent contingent claim. \begin{proposition}\label{prop:exist}
Let $H:(D([0,T],\R),\norm{\cdot}_\infty)\mapsto\R$ be a locally Lipschitz payoff functional such that $\EE^{\PP}[|H(S_T)|]<\infty$ and define $h:D([0,T],\R)\to\R$ by $h(\w_T)=H(\exp\w_T)$, where $\exp\w_T(t):=e^{\w(t)}$ for all $t\in[0,T]$. If $h$ is vertically smooth on $\C([0,T],\R_+)$ in the sense of \defin{vsmooth}, then \begin{equation}
\label{eq:F02}
\exists F\in\CC^{0,2}_b(\W_T)\cap\CC^{0,0}(\W_T),\quad F(t,S_t)=\EE^{\PP}[H(S_T)|\F_t]\quad \mathrm{d} t\times \mathrm{d}\PP\text{-a.s.} \end{equation} \end{proposition}
\proof The first step is to construct analytically a regular \naf\ representation $F:\L_T\mapsto\R$ of the claim price; the regularity and vertical smoothness of $F$ will then follow from the assumptions on the payoff $H$.
By Theorem 1.3.4 in \cite{str-var} on the existence of regular conditional distributions, for any $t\in[0,T]$ there exists a regular conditional distribution $\{\PP^{(t,\w)},\,\w\in\O\}$ of $\PP$ given the (countably generated) sub-$\s$-algebra $\F_t\subset\F$, i.e. a family of probability measures $\PP^{(t,\w)}$ on $(\O,\F)$ such that \begin{enumerate} \item $\forall B\in\F$, the map $\O\ni\w\mapsto\PP^{(t,\w)}(B)\in[0,1]$ is $\F_t$-measurable; \item $\forall A\in\F_t,\forall B\in\F$, $\PP(A\cap B)=\int_A\PP^{(t,\w)}(B)\PP(\mathrm{d}\w)$; \item $\forall A\in\F_t, \forall\w\in\O$, $\PP^{(t,\w)}(A)=\ind_A(\w)$. \end{enumerate} Moreover, for any random variable $Z\in L^1(\O,\F,\PP)$, it holds
$$\EE^{\PP^{(t,\w)}}[|Z|]<\infty\text{ and }\EE^{\PP}\left[Z|\F_t\right](\w)=\EE^{\PP^{(t,\w)}}[Z]\text{ for }\PP\text{-almost all }\w\in\O.$$ By taking $Z=H(S_T)$, since $\PP^{(t,\w)}$ is concentrated on the subspace $\O^{(t,\w)}:=\{\w'\in\O:\w'_t=\w_t\}$, we can rewrite $\EE^{\PP^{(t,\w)}}[H(S_T)]=\EE^{\PP^{(t,\w)}}[H(\w\underset{t}{\oplus} S_T)]$.
For any $t\in[0,T],x>0$, we denote $\PP^{(t,x)}$ the law of the stochastic process $x\ind_{[0,t)}+S^{(t,x)}\ind_{[t,T]}$ on $(\O,\F,\PP)$, where $\{S^{(t,x)}(u)\}_{u\in[t,T]}$ is defined by \begin{equation}
\label{eq:Seps}
S^{(t,x)}(u)= x+\int_t^u\s(r)S^{(t,x)}(r)\mathrm{d} W(r),\quad u\in[t,T]. \end{equation}
Note that $S$ has the same law under $\PP^{(t,x+\eps)}$ as $S\lf1+\frac\eps x\rg$ has under $\PP^{(t,x)}$. Indeed: \begin{align*} S^{(t,x+\eps)}={}&\lf x+\eps+\int_t^\cdot\s(u)S^{(t,x+\eps)}(u)\mathrm{d} W(u)\rg\ind_{[t,T]} \\ ={}&(x+\eps)e^{\int_t^\cdot\s(u)\mathrm{d} W(u)-\frac12\int_t^\cdot\s^2(u)\mathrm{d} u}\ind_{[t,T]}\\ ={}&S^{(t,x)}\lf1+\frac\eps x\rg, \end{align*} hence we have the following identities in law: \begin{align*} \mathrm{Law}(S,\PP^{(t,x+\eps)})={}&\mathrm{Law}\lf(x+\eps)\ind_{[0,t)}+S^{(t,x+\eps)}\ind_{[t,T]},\PP\rg\\ ={}&\mathrm{Law}\lf\lf x\ind_{[0,t)}+S^{(t,x)}\ind_{[t,T]}\rg\lf1+\frac\eps x\rg,\PP\rg\\ ={}&\mathrm{Law}\lf S\lf1+\frac\eps x\rg,\PP^{(t,x)}\rg. \end{align*} Then, consider the \naf\ $F:\L_T\rightarrow\R$ defined by, for all $(t,\w)\in\L_T$, \begin{align} F(t,\w)={}&\EE^{\PP^{(t,\w(t))}}\left[H\lf\w\conc{t}S_T\rg\right] \label{eq:Fw}\\ ={}&\EE^{\PP}\left[H\lf \w\conc{t}\w(t)e^{\int_t^\cdot\s(u)\mathrm{d} W(u)-\frac12\int_t^\cdot\s^2(u)\mathrm{d} u}\ind_{[t,T]} \rg\right]. \nonumber \end{align} Computed respectively on a continuous stopped path $(t,\w)\in\W_T$ and on its vertical perturbation in $t$ of size $\eps$, it gives
$$F(t,\w)=\EE^{\PP^{(t,\w)}}\left[H\lf \w\conc{t}S_T \rg\right]=\EE^{\PP}\left[H(S_T)|\F_t\right](\w)\quad \PP\text{-a.s.},$$ $$F(t,\w^\eps_t)=\EE^{\PP^{(t,\w(t)+\eps)}}\left[H\lf \w\conc{t}S_T \rg\right]=\EE^{\PP^{(t,\w)}}\left[H\lf\w\conc{t}\lf S_T\lf1+\frac\eps{\w(t)}\rg\rg\rg\right].$$
Since $H$ is locally Lipschitz continuous, given $(t,\w)\in[0,T]\times C([0,T],\R_+)$, there exist $\y=\y(\w)>0$ and $K_\w\geq 0$ such that
$$\|\w -\w'\|_\infty \leq\y(\w) \quad \Rightarrow\quad |H(\w)-H(\w')| \leq K_\w \|\w -\w'\|_\infty.$$
Now, we prove the joint continuity by showing the computation for the right side, the other side being analogous by symmetry; this also proves continuity at fixed times. Given $(t,\w)\in\W_T$, for $t'\in[t,T]$ and $(t',\w')\in\W_T$ such that $\dinf((t,\w),(t',\w'))\leq\y$, we have: \begin{align*} &\!\!\!\!\!\!\!\! \abs{F(t,\w)-F(t',\w')}=\\ ={}&\abs{\EE^{\PP^{(t,\w)}}\left[H\lf\w\conc{t}S_T\rg\right]-\EE^{\PP^{(t',\w')}}\left[H\lf\w'\conc{t'}S_T\rg\right]} \\ ={}& \EE^{\PP}\left[\left\lvert H\lf\w\ind_{[0,t)}+\w(t)e^{\int_{t}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t,T]}\rg \right.\right.\\
&\left.\left.\quad\quad\quad\quad -H\lf\w'\ind_{[0,t')}+\w'(t')e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t',T]}\rg \right\rvert\right]\\ \leq{}&\,K_\w\,\EE^{\PP}\left[\norm{(\w-\w')\ind_{[0,t)}}_\infty \right. +\norm{\big(\w(t)e^{\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u}-\w'\big)\ind_{[t,t')}}_\infty \\ &\,\left.+\norm{\big(\w(t)e^{\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u}-\w'(t')e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\big)\ind_{[t',T]}}_\infty\right] \\
\leq{}&\,K_\w \lf\y+|\w(t)|\EE^{\PP}\left[\norm{\big(e^{\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u}-1\big)\ind_{[t,t')}}_\infty\right]+\y \right.\\
&{}+|\w(t)|\EE^{\PP}\left[\norm{e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t',T)}}_\infty\abs{e^{\int_t^{t'}\s(u)\mathrm{d} W(u)-\frac12\int_t^{t'}\s^2(u)\mathrm{d} u}-1}\right] \\ &\left. +\y\EE^{\PP}\left[\norm{e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t',T)}}_\infty\right] \rg \end{align*}
\begin{align}
\leq{}&K_\w\left[ 2\y+|\w(t)|\lf\EE^{\PP}\bigg[\sup_{s\in[t,t')}\abs{e^{\int_t^s\s(u)\mathrm{d} W(u)-\frac12\int_t^s\s^2(u)\mathrm{d} u}-1}\bigg] \right.\right. \nonumber\\
&\left.{} +\EE^{\PP}\bigg[\sup_{s\in[t',T)}\abs{e^{\int_{t'}^s\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^s\s^2(u)\mathrm{d} u}}\bigg]\EE^{\PP}\left[\abs{e^{\int_t^{t'}\s(u)\mathrm{d} W(u)-\frac12\int_t^{t'}\s^2(u)\mathrm{d} u}-1}\right] \rg \nonumber\\
&\left.{} +\y\EE^{\PP}\bigg[\sup_{s\in[t',T)}\abs{e^{\int_{t'}^s\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^s\s^2(u)\mathrm{d} u}}\bigg]\right]\label{eq:jc-1} \end{align} The first and third expectations in \eq{jc-1} go to 0 as $t'$ tends to $t$, indeed:
\begin{align*} 0\leq{}&\EE^{\PP}\left[\abs{e^{\int_t^{t'}\s(u)\mathrm{d} W(u)-\frac12\int_t^{t'}\s^2(u)\mathrm{d} u}-1}\right]\\ \leq{}&\EE^{\PP}\bigg[\sup_{s\in[t,t')}\abs{e^{\int_t^{s}\s(u)\mathrm{d} W(u)-\frac12\int_t^{s}\s^2(u)\mathrm{d} u}-1}\bigg] \\ \leq{}&\EE^{\PP}\bigg[\sup_{s\in[t,t')}\abs{e^{\int_t^{s}\s(u)\mathrm{d} W(u)-\frac12\int_t^{s}\s^2(u)\mathrm{d} u}-1}^2\bigg]^{\frac12},\text{ by H\"older's inequality} \\ \leq{}&2\EE^{\PP}\left[\abs{e^{\int_t^{t'}\s(u)\mathrm{d} W(u)-\frac12\int_t^{t'}\s^2(u)\mathrm{d} u}-1}^2\right]^{\frac12}, \text{ by Doob's martingale inequality} \\ ={}&2\lf\EE^{\PP}\left[(M(t')-1)^2\right]\rg^{\frac12}\\ ={}&2\sqrt{\EE^{\PP}\Big[[M](t')\Big]}, \end{align*} where $M$ denotes the exponential martingale $$M(s)=e^{\int_t^s\s(u)\mathrm{d} W(u)-\frac12\int_t^s\s(u)^2\mathrm{d} u},\quad s\in[t,T].$$ So, the expectation goes to 0 as $t'$ tends to $t$, by \ass{S}. On the other hand, the second and fourth expectations in \eq{jc-1} are bounded above, again by H\"older's and Doob's martingale inequalities: \begin{align*} \EE^{\PP}\bigg[\sup_{s\in[t',T)}\abs{e^{\int_{t'}^s\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^s\s^2(u)\mathrm{d} u}}\bigg]\leq{}&\EE^{\PP}\bigg[\sup_{s\in[t',T)}e^{2\int_{t'}^s\s(u)\mathrm{d} W(u)-\int_{t'}^s\s^2(u)\mathrm{d} u}\bigg]^{\frac12}\\ \leq{}&2\EE^{\PP}\left[\lf\frac{M(T)}{M(t')}\rg^2\right]^{\frac12}\\ ={}&2\,e^{\frac12\int_{t'}^{T}\s^2(u)\mathrm{d} u}, \end{align*} where the last equality uses that $\s$ is deterministic; this is finite by \ass{S}.
The vertical incremental ratio of $F$ is given by \begin{eqnarray*} \frac{F(t,\w^\eps_t)-F(t,\w)}\eps&=&\frac1\eps \EE^{\PP^{(t,\w)}}\left[H\lf \w\conc{t}S_T \lf1+\frac{\eps}{\w(t)}\ind_{[t,T]}\rg \rg - H\lf\w\conc{t}S_T\rg\right]\\ &=&\frac1\eps \EE^{\PP^{(t,\w)}}\left[h\lf\log\lf\frac{\w\conc{t}S_T\lf1+\frac{\eps}{\w(t)}\ind_{[t,T]}\rg}{\w(0)}\rg \rg\right.\\
&&\left.\phantom{\lf\frac{\lf\frac{\eps}{\w(t)}\rg}{\w(0)}\rg}- h\lf\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg \rg\right]\\ &=&\frac1\eps \EE^{\PP^{(t,\w)}}\left[h\lf\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg+\log\lf1+\frac\eps{\w(t)}\rg\ind_{[t,T]} \rg \right.\\ &&\left.\qquad\qquad{}- h\lf\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg \rg\right]. \end{eqnarray*} Then, the vertical smoothness of $h$ allows us to use a dominated convergence argument to pass to the limit as $\eps$ goes to 0 inside the expectation. So we get: \begin{eqnarray*}
\vd F(t,\w)&=&\frac1{\w(t)}\EE^{\PP^{(t,\w)}}\left[\partial_{e}g^h\lf0;t,\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg\rg\right],\\
\vd^2F(t,\w)&=&\frac1{\w(t)^2}\lf\EE^{\PP^{(t,\w)}}\left[\ppa{e}g^h\lf0;t,\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg\rg\right]\right.\\ &&\left.\qquad-\EE^{\PP^{(t,\w)}}\left[\partial_{e}g^h\lf0;t,\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg\rg\right]\rg. \end{eqnarray*} The joint continuity of the first- and second-order vertical derivatives of $F$ is proved similarly, by means of the H\"older condition \eq{gh-lip}. Indeed, if $\dinf((t,\w),(t',\w'))<\y$, then:
\begin{align}
&\!\!\!\!\!\!\!\! \abs{\vd F(t,\w)-\vd F(t',\w')}=\nonumber\\ ={}&\abs{\frac1{\w(t)}\EE^{\PP^{(t,\w)}}\left[\partial_{e} g^h\lf0;t,\log\lf\frac{\w\conc{t}S_T}{\w(0)}\rg\rg\right]\right. \nonumber \\ &\left.-\frac1{\w'(t')}\EE^{\PP^{(t',\w')}}\left[\partial_{e} g^h\lf0;t',\log\lf\frac{\w'\conc{t'}S_T}{\w'(0)}\rg\rg\right]} \nonumber\\
={}&\frac{1}{\w(t)\w'(t')}\EE^{\PP}\left[\left|\w'(t')\partial_{e}g^h\lf0;t,\log\lf\frac{\w\ind_{[0,t)}+\w(t)e^{\int_{t}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t,T]}}{\w(0)}\rg\rg\right.\right. \nonumber\\
&\left.\left.-\w(t)\partial_{e}g^h\lf0;t',\log\lf\frac{\w'\ind_{[0,t')}+\w'(t')e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t',T]}}{\w'(0)}\rg\rg\right|\right] \nonumber\\ \leq{}&\frac{1}{\w(t)(\w(t)-\y)}\Bigg\{\EE^{\PP}\left[\y\abs{\partial_{e}g^h\lf0;t,\log\lf\frac{\w\ind_{[0,t)}+\w(t)e^{\int_{t}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t}^{\cdot}\s^2(u)\mathrm{d} u}\ind_{[t,T]}}{\w(0)}\rg\rg}\right] \nonumber\\
&{}+K|\w(t)|\lf|t'-t|^\b+\norm{\lf\log\frac\w{\w(0)}-\log\frac{\w'}{\w'(0)}\rg\ind_{[0,t)}}_\infty\right.\nonumber\\ &{}+\EE^{\PP}\Bigg[\norm{\lf\log\lf\frac{\w(t)}{\w(0)}e^{\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u}\rg-\log\frac{\w'}{\w'(0)}\rg\ind_{[t,t')}}_\infty \nonumber\\ &\left.{}+\left\lVert \lf\log\lf\frac{\w(t)}{\w(0)}e^{\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u}\rg-\log\lf\frac{\w'(t')}{\w'(0)}e^{\int_{t'}^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_{t'}^{\cdot}\s^2(u)\mathrm{d} u}\rg\rg\ind_{[t',T]}\right\rVert_\infty \bigg]\rg\Bigg\} \nonumber\\
\leq{}&\frac{1}{\w(t)(\w(t)-\y)}\Bigg\{\y C_1 +K|\w(t)|\lf|t'-t|^\b+2\y'\right. \label{eq:jc-3}\\ &{}+\EE^{\PP}\left[\norm{\lf\int_t^{\cdot}\s(u)\mathrm{d} W(u)-\frac12\int_t^{\cdot}\s^2(u)\mathrm{d} u\rg\ind_{[t,t')}}_\infty\right] \nonumber\\ &{}+\EE^{\PP}\left[\abs{\int_t^{t'}\s(u)\mathrm{d} W(u)-\frac12\int_t^{t'}\s^2(u)\mathrm{d} u}\right]\Bigg\} \nonumber\\
\leq{}&K'\lf\y+|t'-t|^\b+2\y'+3\EE^{\PP}\left[\abs{\int_{t}^{t'}\s(u)\mathrm{d} W(u)}^2\right]^{\frac12}+\bar\s^2(t'-t)\rg \label{eq:jc-4} \end{align} The two constants $C_1$ and $\y'$ in \eq{jc-3} come respectively from the uniform bound on $\partial_{e} g^h$ and from the bound on $\norm{\log\frac\w{\w(0)}-\log\frac{\w'}{\w'(0)}}_\infty$, while to obtain \eq{jc-4} we used H\"older's and Doob's martingale inequalities. \endproof
\section{Vertical convexity as a condition for robustness} \label{sec:convex}
The path-dependent analogue of the convexity property that plays a role in the analysis of hedging strategies turns out to be the following.
\begin{definition}\label{def:verticalconvex}
A \naf\ $G:\L_T\to\R$ is called \emph{vertically convex on $U\subset\L_T$} if, for all $(t,\w)\in U$, there exists a neighborhood $V\subset\R$ of 0 such that the map $$\bea{rcl}V&\to&\R\\ e&\mapsto&G\lf t,\w+e\ind_{[t,T]}\rg \end{array}$$ is convex. \end{definition} It is readily observed that if $F\in\CC^{0,2}$ is vertically convex on $U$, then $\vd^2F(t,\w)\geq0$ for all $(t,\w)\in U$.
We now provide a sufficient condition on the payoff functional which ensures that the vertically smooth value functional in \eq{F02} is vertically convex. \begin{proposition}[Vertical convexity of pricing functionals]\label{prop:convex}
Assume that, for all $(t,\w)\in\mathbb T\times\supp(S,\PP)$, there exists an interval $\mathcal I\subset\R$, $0\in\mathcal I$, such that the map
\begin{equation} \label{eq:gh}
\bea{rcl} v^H(\cdot;t,\w):\mathcal I&\to&\R,\\
e&\mapsto&v^H(e;t,\w)=H\lf\w(1+e\ind_{[t,T]})\rg
\end{array}
\end{equation} is convex. If the value functional $F$ defined in \eq{Fw} is of class $\CC^{0,2}(\W_T)$, then it is vertically convex on $\mathbb T\times\supp(S,\PP)$. In particular: \begin{equation}\label{eq:vd2F} \forall(t,\w)\in\mathbb T\times\supp(S,\PP),\quad \vd^2F(t,\w)\geq0. \end{equation} \end{proposition} \proof We only need to show that convexity of the map in \eq{gh} is inherited by the map $e\mapsto F(t,\w_t^e)$, which is also twice differentiable in 0 by assumption, hence \eq{vd2F} follows. A simple way of proving convexity of a continuous function is through the property of Wright-convexity, introduced by \citet{wright} in 1954. Precisely, we want to prove that for every $(t,\w)\in\mathbb T\times\supp(S,\PP)$, for all $\eps,e>0$ such that $\frac{e}{\w(t)},\frac{e+\eps}{\w(t)}\in\mathcal I$, the map $$\I'\to\R,\quad e\mapsto F(t,\w^{e+\eps}_t)-F(t,\w^e_t)$$ is increasing: \begin{align*} F(t,\w^{e+\eps}_t)-F(t,\w^e_t)={}&\EE^{\PP^{(t,\w)}}\left[H\lf \lf\w\conc{t}S_T\rg\lf1+\frac{e+\eps}{\w(t)}\ind_{[t,T]}\rg \rg \right.\\ &\left.\quad\quad\quad- H\lf \lf\w\conc{t}S_T\rg\lf1+\frac{e}{\w(t)}\ind_{[t,T]}\rg \rg\right]\\ ={}&\EE^{\PP^{(t,\w)}}\left[v^H\lf\frac{e+\eps}{\w(t)};t,\w\conc{t}S_T\rg-v^H\lf\frac{e}{\w(t)};t,\w\conc{t}S_T\rg\right]. \end{align*} Since $v^H(\cdot;t,\w)$ is continuous and convex, hence Wright-convex, on $\I$, the random variable inside the expectation is pathwise increasing in $e$. Hence also $\mathcal I'\ni e\mapsto F(t,\w_t^e)$ is Wright-convex, where $\I':=\w(t)\I\subset\R$, $0\in\mathcal I'$. Therefore, $F$ is vertically convex. Moreover, since $F\in\CC^{0,2}(\W_T)$, \defin{verticalconvex} implies that $$\forall(t,\w)\in\mathbb T\times\supp(S,\PP),\quad \vd^2F(t,\w)\geq0.$$ \endproof
\begin{remark}
If there exists an interval $\mathcal I\subset\R$, $B\lf0,\frac{\abs{\De\w(t)}}{\w(t)}\rg\subset\mathcal I$, such that the map $v^H(\cdot;t,\w)$ defined in \eq{gh} is convex, then \begin{equation}\label{eq:vd2F-jumps} \vd^2F(t,\w_{t-}+\x\ind_{[t,T]})\geq0\quad\forall\x\in B(0,\abs{\De\w(t)}). \end{equation}
\end{remark}
\section{A model with path-dependent volatility: Hobson-Rogers} \label{sec:HR} \sectionmark{A model with path-dependent volatility: Hobson-Rogers}
In the model proposed by \citet{hobson-rogers}, under the market probability $\tilde\PP$, the discounted log-price process $Z$, $Z(t)=\log S(t)$ for all $t\in[0,T]$, is assumed to solve the stochastic differential equation $$\mathrm{d} Z(t)=\s(t,Z_t)\mathrm{d} \tilde W(t)+\mu(t,Z_t)\mathrm{d} t,$$ where $\tilde W$ is a $\tilde\PP$-Brownian motion and $\s,\mu$ are non anticipative functionals of the process itself, which can be rewritten as Lipschitz-continuous functions of the current time, price and offset functionals of order up to $n$: $$\bea{c} \s(t,\w)=\s^n(t,\w(t),o^{(1)}(t,\w),\ldots,o^{(n)}(t,\w)),\\ \mu(t,\w)=\mu^n(t,\w(t),o^{(1)}(t,\w),\ldots,o^{(n)}(t,\w)),\\ o^{(m)}(t,\w)=\int_0^\infty \l e^{-\l u}(\w(t)-\w(t-u))^m\mathrm{d} u,\quad m=1,\ldots,n. \end{array}$$ Note that, in the original formulation in \cite{hobson-rogers}, the authors take into account the interest rate and denote by $Z(t)=\log(S(t)e^{-rt})$ the discounted log-price. We use the same notation for the forward log-prices instead.
Even if the coefficients of the SDE are path-dependent functionals, \cite{hobson-rogers} proved that the $(n+1)$-dimensional process $(Z,O^{(1)},\ldots,O^{(n)})$ composed of the log-price process and the offset processes up to order $n$, $O^{(m)}(t):=o^{(m)}(t,Z_t)$, is a Markov process. In the special case $n=1$ and $\s^n(t,x,o)=\s^n(o)$, $\mu^n(t,x,o)=\mu^n(o)$, denoted $O:=O^{(1)}$, they proved the existence of an equivalent martingale measure $\PP$ defined by $${\frac{\mathrm{d}\PP}{\mathrm{d}\tilde\PP}}\rvert_{\F_t}=\exp\left\{-\int_0^t\th(O(u))\mathrm{d} \tilde W(u)-\frac12\int_0^t\th(O(u))^2\mathrm{d} u\right\},$$ where $\th(o)=\frac12\s^n(o)+\frac{\mu^n(o)}{\s^n(o)}$. Then, the offset process solves \begin{eqnarray*}
\mathrm{d} O(t)&=&\s^n(O(t))\mathrm{d}\tilde W(t)+(\mu^n(O(t))-\l O(t))\mathrm{d} t\\ &=&\s^n(O(t))\mathrm{d} W(t)-\lf\frac12\s^n(O(t))^2+\l O(t)\rg\mathrm{d} t, \end{eqnarray*} where $W$ is the $\PP$-Brownian motion defined by $W(t)=\tilde W(t)+\int_0^t\th(O(u))\mathrm{d} u$. So, the (forward) price process solves \begin{equation}\label{eq:HR} \mathrm{d} S(t)=S(t)\s^n(O(t))\mathrm{d} W(t), \end{equation} where $W$ is a standard Brownian motion on $(\O,\F,\FF,\PP)$ and $\s^n:\R\to\R$ is a Lipschitz-continuous function, satisfying some integrability conditions such that the corresponding pricing PDEs admit a classical solution.
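For concreteness, the risk-neutral pair $(S,O)$ can be simulated with a standard Euler scheme. The sketch below is purely illustrative: the specific choice of $\s^n$, the parameter values and the time grid are assumptions made here for the example, not part of the model.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, lam = 1.0, 1000, 1.0
dt = T / n_steps
sig_n = lambda o: 0.2 * np.sqrt(1.0 + np.minimum(o * o, 5.0))  # Lipschitz, bounded

S, O = 100.0, 0.0
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    s = sig_n(O)
    S += S * s * dW                          # dS = S sig^n(O) dW
    O += s * dW - (0.5 * s * s + lam * O) * dt
print(S, O)
\end{verbatim}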
The price of a European contingent claim with payoff $H(S(T))$, satisfying appropriate integrability and growth conditions, is given, for all $(t,\w)\in\W_T$, by $$F(t,\w)=f(t,\w(t),o(t,\w)),\quad o(t,\w)=\int_0^\infty\l e^{-\l u}(\w(t)-\w(t-u))\mathrm{d} u,$$ where $f\in C^{1,2,2}([0,T)\times\R_+\times\R)\cap\C([0,T]\times\R_+\times\R)$ is the solution of the partial differential equation on $[0,T)\times\R_+\times\R$ $$\frac{\s^n(o)^2}2(x^2\partial^2_{xx}f+2x\partial_{xo}f+\partial_{oo}f)-\lf\frac12\s^n(o)^2+\l o\rg\partial_o f+\partial_{t}f=0,$$ where $f\equiv f(t,x,o)$, with final datum $f(T,x,o)=H(x)$. Using a change of variable, the pricing problem simplifies to solving the following degenerate PDE on $[0,T]\times\R\times\R$: \begin{equation}\label{eq:pde-HR} \frac12\s^n(x_1-x_2)^2(\partial_{x_1x_1}u-\partial_{x_1}u)+\l(x_1-x_2)\partial_{x_2}u-\partial_t u=0, \end{equation} where $u\equiv u(T-t,x_1,x_2)=f(t,e^{x_1},x_1-x_2)$, with initial condition $u(0,x_1,x_2)=H(e^{x_1})$. Note that the pricing PDE~\eq{pde-HR} reduces to the universal pricing equation~\eq{fpde}, where, for all $(t,\w)\in\W_T$, $$F(t,\w)=u(T-t,\log\w(t),\log\w(t)-o(t,\w)),$$ and
\begin{eqnarray*} \hd F(t,\w)&=&-\partial_t u(T-t,\log\w(t),\log\w(t)-o(t,\w))\\ &&{}+\l\, o(t,\w)\,\partial_{x_2}u(T-t,\log\w(t),\log\w(t)-o(t,\w)), \\ \vd F(t,\w)&=&\frac1{\w(t)}\partial_{x_1}u(T-t,\log\w(t),\log\w(t)-o(t,\w)),\\ \quad\vd^2 F(t,\w)&=&\frac1{\w(t)^2}(\partial_{x_1x_1}u(T-t,\log\w(t),\log\w(t)-o(t,\w))\\ &&\quad\quad{}-\partial_{x_1}u(T-t,\log\w(t),\log\w(t)-o(t,\w))). \end{eqnarray*}
\section{Examples} \label{sec:ex}
We now show how the above results apply to specific examples of hedging strategies for path-dependent derivatives.
\subsection{Discretely-monitored path-dependent derivatives} \label{sec:discr}
The simplest class of path-dependent derivatives is that of discretely-monitored claims. The robustness of delta-hedging strategies for discretely-monitored path-dependent derivatives was studied in \cite{ss} as shown in \Sec{ss}. In the case of a Black-Scholes pricing model with time-dependent volatility, we show that such results may be derived, without probabilistic assumptions on the true price dynamics, as a special case of the results presented above, and we obtain explicit expressions for the first and second order sensitivities of the pricing functional (see also Cont and Yi [9]).
The following lemma describes the regularity of pricing functionals for discretely-monitored options in a Black-Scholes model with time-dependent volatility $\s:[0,T]\rightarrow\R_+$ such that $\int_0^T\s^2(t)\mathrm{d} t<\infty$. The regularity assumption on the payoff functional is weaker than those required for \prop{exist}, thanks to the finite dimension of the problem.
\begin{lemma}[Discretely-monitored path-dependent derivatives]\label{lem:BS}
Let $H:D([0,T],\R_+)\rightarrow\R_+$ and assume that there exist a partition $0=t_0<t_1<\ldots<t_n\leq T$ and a function $h\in C^2_b(\R^n;\R_+)$ such that $$\forall \w\in D([0,T],\R_+),\quad H(\w_T)=h(\w(t_1),\w(t_2),\ldots,\w(t_n)).$$ Then, the \naf\ $F$ defined in \eq{Fw} is locally regular, that is $F\in\Cloc(\W_T)$, with horizontal and vertical derivatives given in closed form. \end{lemma} \proof For any $\w\in\O$ and $t\in[0,T]$, let us denote $\bar k\equiv\bar k(n,t):=\max\{i\in\{1,\ldots,n\}\;:\;t_i\leq t\}$; then, for $s$ small enough, $t+s\in[t_{\bar k},t_{\bar k+1})$ and we have \begin{align*} &\!\!\!\!F(t+s,\w_t)-F(t,\w_t)\\
={}&\EE^{\QQ}\left[H\lf \w(t_1),\ldots,\w(t_{\bar k}),\w(t)e^{\int_{t+s}^{t_{\bar k+1}}\s(u)\mathrm{d} W(u)-\frac12\int_{t+s}^{t_{\bar k+1}}\s^2(u)\mathrm{d} u},\ldots,\right.\right.\\ &\qquad\qquad\left.\w(t)e^{\int_{t+s}^{t_{n}}\s(u)\mathrm{d} W(u)-\frac12\int_{t+s}^{t_n}\s^2(u)\mathrm{d} u}\rg+{}\\ &\quad\quad {}-H\lf\w(t_1),\ldots,\w(t_{\bar k}),\w(t)e^{\int_{t}^{t_{\bar k+1}}\s(u)\mathrm{d} W(u)-\frac12\int_{t}^{t_{\bar k+1}}\s^2(u)\mathrm{d} u},\ldots,\right.\\ &\qquad\qquad\left.\w(t)e^{\int_{t}^{t_{n}}\s(u)\mathrm{d} W(u)-\frac12\int_{t}^{t_n}\s^2(u)\mathrm{d} u}\rg\bigg]\\ ={}&\idotsint H\lf\w(t_1),\ldots,\w(t_{\bar k}),\w(t)e^{y_1},\ldots,\w(t)e^{y_{n-\bar k}}\rg\prod_{i=1}^{n-\bar k}\frac{e^{-\frac{\lf y_i+\frac12\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\rg^2}{2\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}}}{\sqrt{2\pi\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}}\mathrm{d} y_i \\ &{}- \idotsint H\lf\w(t_1),\ldots,\w(t_{\bar k}),\w(t)e^{y_1},\ldots,\w(t)e^{y_{n-\bar k}}\rg\prod_{i=1}^{n-\bar k}\frac{e^{-\frac{\lf y_i+\frac12\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\rg^2}{2\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}}}{\sqrt{2\pi\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}}\mathrm{d} y_i . \end{align*} By denoting $$v_i(s):=\frac{e^{-\frac{\lf y_i+\frac12\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\rg^2}{2\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}}}{\sqrt{2\pi\int_{t+s}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u}},\quad i=1,\ldots,n-\bar k,$$ dividing by $s$ and taking the limit for $s$ going to 0, we obtain \begin{align} \hd F(t,\w)={}&\lim_{s\rightarrow0}\frac{F(t+s,\w_t)-F(t,\w_t)}s \nonumber\\ ={}&\sum_{j=1}^{n-\bar k}\idotsint H\lf\w(t_1),\ldots,\w(t_{\bar k}),\w(t)e^{y_1},\ldots,\w(t)e^{y_{n-\bar k}}\rg\!\!\!\!\prod_{\bea{c}\scriptstyle{i=1,\ldots,n-\bar k}\\\scriptstyle{i\neq j} \end{array}}\!\!\!\!\!\!v_j'(0)v_i(0)\mathrm{d} y_i\mathrm{d} y_j, \end{align} where, for $i=1,\ldots,n-\bar k$, $$\bea{l}v_i'(0)=\frac{v_i(0)\s^2(t)}{2\lf \int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\rg^2}\left( \left( y_i+\frac12\int_{t}^{t_{\bar k+i}}\s^2\mathrm{d} u\rg^{\phantom{1}}\!\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\right.\\ \left.\quad\qquad\qquad\qquad\qquad\qquad-\lf y_i+\frac12\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u\rg^2+\int_{t}^{t_{\bar k+i}}\s^2(u)\mathrm{d} u \rg. \end{array} $$ Moreover, the first and second vertical derivatives are explicitly computed as: \begin{align}
\vd F(t,\w)={}&\sum_{j=1}^{n-k}\idotsint \partial_{k+j} H\lf\w(t_1),\ldots,\w(t_{k}),\w(t)e^{y_1},\ldots,\w(t)e^{y_{n-\bar k}}\rg e^{y_j}\prod_{i=1}^{n-k}v_i(0)\mathrm{d} y_i,\\
\vd^2F(t,\w)={}&\sum_{i,j=1}^{n-k}\idotsint\partial_{k+i,k+j} H\lf\w(t_1),\ldots,\w(t_{k}),\w(t)e^{y_1},\ldots,\w(t)e^{y_{n-\bar k}}\rg e^{y_i+y_j}\prod_{l=1}^{n-k}v_l(0)\mathrm{d} y_l, \end{align} where $k\equiv k(n,t):=\max\{i\in\{1,\ldots,n\}\;:\;t_i<t\}$. \endproof
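The closed-form derivatives above can be cross-checked against a direct Monte Carlo evaluation of the pricing functional in the discretely-monitored case. The following sketch is an illustration only: the volatility curve, the fixing dates and the payoff $h$ are assumptions made here, and the future fixings are simulated conditionally on the path observed up to time $t$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
sigma2 = lambda u: (0.2 + 0.1 * u) ** 2               # sigma^2(u)
fixings = np.array([0.25, 0.5, 0.75, 1.0])            # t_1 < ... < t_n
h = lambda X: np.maximum(X.mean(axis=1) - 100.0, 0.0) # an average-type h

def F(t, past, x, n_paths=100_000, n_grid=2000):
    """past: observed omega(t_i), t_i <= t; x = omega(t)."""
    future = fixings[fixings > t]
    grid = np.linspace(t, future[-1], n_grid + 1)
    # integrated variance int_t^{t_i} sigma^2(u) du at each future fixing
    cumvar = np.interp(future, grid[1:],
                       np.cumsum(sigma2(grid[:-1]) * np.diff(grid)))
    dvar = np.diff(np.concatenate([[0.0], cumvar]))   # between fixings
    Z = rng.standard_normal((n_paths, len(future)))
    logS = np.log(x) + np.cumsum(Z * np.sqrt(dvar) - 0.5 * dvar, axis=1)
    X = np.column_stack([np.tile(past, (n_paths, 1)), np.exp(logS)])
    return h(X).mean()

print(F(0.3, past=np.array([101.0]), x=102.0))
\end{verbatim}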
\subsection{Robust hedging for Asian options} \label{sec:asian}
Asian options, which are options on the average price computed across a certain fixing period, are commonly traded in currency and commodities markets. The payoff of Asian options depends on an average of prices during the lifetime of the option, which can be of two types: an arithmetic average $$M^A(T)=\int_0^TS(u)\mu(\mathrm{d} u),$$ or a geometric average $$M^G(T)=\int_0^T\log S(u)\mu(\mathrm{d} u).$$ We consider Asian call options with date of maturity $T$, whose payoff is given by a continuous functional on $(D([0,T],\R),\norm{\cdot}_\infty)$: $$\bea{ll} H^A(S_T)=(M^A(T)-K)^+=:\Psi^A(S(T),M^A(T))&\text{arithmetic Asian call},\\ H^G(S_T)=(e^{M^G(T)}-K)^+=:\Psi^G(S(T),M^G(T))&\text{geometric Asian call}. \end{array}$$ Various weighting schemes may be considered: \begin{itemize} \item if $\mu(\mathrm{d} u)=\d_{\{T\}}(\mathrm{d} u)$, we reduce to a European option, with strike price $K$; \item if $\mu(\mathrm{d} u)=\frac1T\ind_{[0,T]}(u)\mathrm{d} u$, we have a \textit{fixed strike} Asian option, with strike price $K$; \item in the arithmetic case, if $\mu(\mathrm{d} u)=\d_{\{T\}}(\mathrm{d} u)-\frac1T\ind_{[0,T]}(u)\mathrm{d} u$ and $K=0$, we have a \textit{floating strike} Asian option; the geometric floating strike Asian call has instead payoff $(S(T)-e^{M^G(T)})^+$ with $\mu(\mathrm{d} u)=\frac1T\ind_{[0,T]}(u)\mathrm{d} u$. \end{itemize} Here, we consider the hedging strategies for fixed strike Asian options, first in a Black-Scholes pricing model, where the volatility is a deterministic function of time, then in a model with path-dependent volatility, the Hobson-Rogers model introduced in \Sec{HR}. First, we show that these models admit a smooth pricing functional. Then, we show that the assumptions of \prop{convex} are satisfied, which leads to robustness of the hedging strategy.
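On a discretized path the two averages and the fixed-strike payoffs are immediate to compute; in the following lines the trapezoidal quadrature for $\mu(\mathrm{d} u)=\frac1T\mathrm{d} u$ and the stand-in path are assumptions of the example.

\begin{verbatim}
import numpy as np

T, K = 1.0, 100.0
u = np.linspace(0.0, T, 1001)
S = 100.0 * np.exp(0.2 * np.sqrt(u) * np.sin(5 * u))  # a stand-in path

M_A = np.trapz(S, u) / T                  # arithmetic average
M_G = np.trapz(np.log(S), u) / T          # geometric (log) average
H_A = max(M_A - K, 0.0)                   # fixed-strike arithmetic Asian call
H_G = max(np.exp(M_G) - K, 0.0)           # fixed-strike geometric Asian call
print(H_A, H_G)
\end{verbatim}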
\subsubsection{Black-Scholes delta-hedging for Asian options} \label{sec:BS-asian}
In the Black-Scholes model, the value functional of such options can be computed in terms of a standard function of three variables (see e.g. \cite[Section 7.6]{pascucci}). In the arithmetic case: for all $(t,\w)\in\W_T$, \begin{equation}\label{eq:Ff-BS-arit} F(t,\w)=f(t,\w(t),a(t,\w)),\quad a(t,\w)=\int_0^t\w(s)\mathrm{d} s, \end{equation} where $f\in C^{1,2,2}([0,T)\times\R_+\times\R_+)\cap\C([0,T]\times\R_+\times\R_+)$ is the solution of the following Cauchy problem with final datum: \begin{equation} \label{eq:asianpde-BS-arit} \begin{cases} \frac{\s^2(t)x^2}2\partial^2_{xx}f(t,x,a)+x\partial_{a}f(t,x,a)+\partial_{t}f(t,x,a)=0,&t\in[0,T),\,a,x\in\R_+\\ f(T,x,a)=\Psi^A\lf x,\frac{a}{T}\rg.& \end{cases} \end{equation} Different parametrizations have been suggested in order to facilitate the computation of the solution, which is however not in closed form. For example, \cite{dupire} shows a different characterization which improves the numerical discretization of the problem, while \cite{rogershi} reduces the pricing issue to the solution of a parabolic PDE in two variables, thus decreasing the dimension of the problem, as done in \cite{ingersoll} for the case of a floating-strike Asian option.
In the geometric case: for all $(t,\w)\in\W_T$, \begin{equation}\label{eq:Ff-BS-geom} F(t,\w)=f(t,\w(t),g(t,\w)),\quad g(t,\w)=\int_0^t\log\w(s)\mathrm{d} s, \end{equation} where $f\in C^{1,2,2}([0,T)\times\R_+\times\R)\cap\C([0,T]\times\R_+\times\R)$ is the solution of the following Cauchy problem with final datum: for $t\in[0,T)$, $x\in\R_+$, $g\in\R$, \begin{equation} \label{eq:asianpde-BS-geom} \begin{cases} \frac{\s^2(t)x^2}2\partial_{xx}^2f(t,x,g)+\log x\partial_{g}f(t,x,g)+\partial_{t}f(t,x,g)=0,\\ f(T,x,g)=\Psi^G\lf x,\frac{g}{T}\rg. \end{cases} \end{equation} As in the arithmetic case, the dimension of the problem \eq{asianpde-BS-geom} can be reduced to two by a change of variable. Moreover, in this case, it is possible to obtain a Kolmogorov equation associated with a degenerate parabolic operator that has a Gaussian fundamental solution.
We remark that the pricing PDEs~\eq{asianpde-BS-arit},\eq{asianpde-BS-geom} are both equivalent to the functional partial differential equation~\eq{fpde} for $F$ defined respectively by \eq{Ff-BS-arit} and \eq{Ff-BS-geom}. Indeed, computing the horizontal and vertical derivatives of $F$ yields $$\bea{l} \hd F(t,\w)=\partial_t f(t,\w(t),a(t,\w))+\w(t)\partial_a f(t,\w(t),a(t,\w)), \\ \vd F(t,\w)=\partial_x f(t,\w(t),a(t,\w)),\quad\vd^2 F(t,\w)=\partial_{xx}^2 f(t,\w(t),a(t,\w)) \end{array} $$ for the arithmetic case, and $$\bea{l} \hd F(t,\w)=\partial_t f(t,\w(t),g(t,\w))+\log\w(t)\partial_g f(t,\w(t),g(t,\w)), \\ \vd F(t,\w)=\partial_x f(t,\w(t),g(t,\w)),\quad\vd^2 F(t,\w)=\partial_{xx}^2 f(t,\w(t),g(t,\w)) \end{array} $$ for the geometric case.
Thus, the standard pricing problems for the arithmetic and geometric Asian call options turn out to be particular cases of \prop{hedge}, with $A=\s^2\w^2$. In particular, the delta-hedging strategy is given by \begin{align*} \phi(t,\w)=\vd F(t,\w)={}&\partial_{x} f(t,\w(t),a(t,\w))\quad\text{(arithmetic), or}\\ ={}&\partial_{x} f(t,\w(t),g(t,\w))\quad\text{(geometric)}. \end{align*}
The following claim is an application of \prop{convex}. \begin{corollary}\label{cor:BS-robust}
If the Black-Scholes volatility term structure over-estimates the realized market volatility, i.e. $$\s(t)\geq\s^{\mathrm{mkt}}(t,\w)\quad \forall\w\in \A\cap\supp(S,\PP)$$ then the Black-Scholes delta hedges for the Asian options with payoff functionals $$\bea{ll} H^A(S_T)=(\frac1T\int_0^TS(t)\mathrm{d} t-K)^+&\text{arithmetic Asian call},\\ H^G(S_T)=(e^{\frac1T\int_0^T\log S(t)\mathrm{d} t}-K)^+&\text{geometric Asian call}, \end{array}$$ are robust on $\A\cap\supp(S,\PP)$. Moreover, the hedging error at maturity is given by $$\frac12\int_0^T \lf{\s}(t)^2-\s^{\mathrm{mkt}}(t,\w)^2\rg\w^2(t) \ppa{x}f \mathrm{d} t,$$ where $f$ stands for, respectively, $f(t,\w(t),a(t,\w))$ solving the Cauchy problem \eq{asianpde-BS-arit}, and $f(t,\w(t),g(t,\w))$ solving the Cauchy problem \eq{asianpde-BS-geom}. \end{corollary} Let us emphasize again that the hedger's profit-and-loss depends explicitly on the Gamma of the option and on the distance of the Black-Scholes volatility from the realized volatility during the lifetime of the contract.
\proof The integrability of $H^A,H^G$ in $(\O,\PP)$ follows from the Feynman-Kac representation of the solution of the Cauchy problems with final datum \eq{asianpde-BS-arit}, \eq{asianpde-BS-geom}.
By the functional representation in~\eq{Ff-BS-arit}, respectively \eq{Ff-BS-geom}, the pricing functional $F$ is smooth, i.e. it satisfies \eq{regF}. If the assumptions of \prop{convex} are satisfied, we can thus apply \prop{robust} to prove the robustness property. We have to check the convexity of the map $v^H(\cdot;t,\w)$ in \eq{gh} for all $(t,\w)\in[0,T]\times Q(\O,\Pi)$. Concerning the arithmetic Asian call option, we have: \begin{align*}
v^{H^A}(e;t,\w)={}&H^A\lf\w(1+e\ind_{[t,T]})\rg\\ ={}&\lf\frac1T\lf\int_0^t\w(u)\mathrm{d} u+\int_t^T\w(u)(1+e)\mathrm{d} u\rg-K\rg^+\\ ={}&\lf m(T)+\frac e T(a(T)-a(t))-K \rg^+\\ ={}&\frac{a(T)-a(t)}T\lf e-K'\rg^+, \end{align*} where $m(T)=\frac1T a(T)$ and $K'=\frac{KT-a(T)}{a(T)-a(t)}$, which is clearly convex in $e$.
As for the geometric Asian call option, we have: \begin{align*}
v^{H^G}(e;t,\w)={}&H^G\lf\w(1+e\ind_{[t,T]})\rg\\ ={}&\lf e^{\frac1T\int_0^t\log\w(u)\mathrm{d} u}e^{\frac1T\int_t^T\log(\w(u)(1+e))\mathrm{d} u}-K\rg^+\end{align*} which is a convex function in $e$ around 0, since $\w$ is bounded away from 0 on $[0,T]$. Indeed: $e\mapsto\int_t^T\log(\w(u)(1+e))\mathrm{d} u$ is convex since it is the integral in $u$ of a function of $(u,e)$ which is convex in $e$ by preservation of convexity under affine transformation; then $e\mapsto e^{\frac1T\int_t^T\log(\w(u)(1+e))\mathrm{d} u}$ is convex because it is the composition of a convex increasing function and a convex function. \endproof
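The convexity of $v^{H^A}(\cdot;t,\w)$ established in the proof can also be observed numerically: on any positive path, the second differences of $e\mapsto v^{H^A}(e;t,\w)$ are non-negative. The path and parameters below are illustrative assumptions.

\begin{verbatim}
import numpy as np

T, t, K = 1.0, 0.4, 100.0
u = np.linspace(0.0, T, 1001)
w = 100.0 + 5.0 * np.sin(3 * u)                   # a stand-in path
a_t = np.trapz(w[u <= t], u[u <= t])              # int_0^t omega
a_Tt = np.trapz(w[u >= t], u[u >= t])             # int_t^T omega

v = lambda e: max((a_t + (1.0 + e) * a_Tt) / T - K, 0.0)
e = np.linspace(-0.5, 0.5, 201)
vals = np.array([v(z) for z in e])
print((np.diff(vals, 2) >= -1e-12).all())         # True: convex in e
\end{verbatim}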
\begin{remark} The robustness of the Black-Scholes-delta hedging for the arithmetic Asian option is in fact a direct consequence of \prop{robust}. Indeed, in the Black-Scholes framework, the Gamma of an Asian call option is non-negative, as it has been shown for different closed-form analytic approximations found in the literature. An example can be seen in \cite{mil-posner}, where the density of the arithmetic mean is approximated by a reciprocal gamma distribution which is the limit distribution of an infinite sum of correlated log-normal random variables. This already implies the condition \eq{vd2F}. \end{remark}
\subsubsection{Hobson-Rogers delta-hedging for Asian options} \label{sec:RH}
We have already shown in \Sec{HR} that the Hobson-Rogers model admits a smooth pricing functional for suitable non-path-dependent payoffs. \citet{pascucci-difra} proved that also the problem of pricing and hedging a geometric Asian option can be similarly reduced to a degenerate PDE belonging to the class of Kolmogorov equations, for which a classical solution exists. In this case, the pricing functional can be written as a function of four variables \begin{equation}\label{eq:Fu-geom} F(t,\w)=u(T-t,\log\w(t),\log\w(t)-o(t,\w),g(t,\w)), \end{equation} where $u$ is the classical solution of the following Cauchy problem on $[0,T]\times\R\times\R\times\R$: \begin{equation}\label{eq:asianpde-HR-geom}\begin{cases} \frac12\s^n(x_1-x_2)^2(\partial^2_{x_1x_1}u-\partial_{x_1}u)+\l(x_1-x_2)\partial_{x_2}u+x_1\partial_{x_3}u-\partial_t u=0,\\ u(0,x_1,x_2,x_3)=\Psi^G(e^{x_1},\frac{x_3}T). \end{cases} \end{equation}
The following claim is the analogue of \cor{BS-robust} for the Hobson-Rogers model; the proof is omitted because it follows exactly the same arguments as the proof of \cor{BS-robust}. \begin{corollary}
If the Hobson-Rogers volatility in \eq{HR} over-estimates the realized market volatility, i.e. $$\s(t,\w)=\s^n(o(t,\w))\geq\s^{\mathrm{mkt}}(t,\w)\quad \forall\w\in \A\cap\supp(S,\PP)$$ then the Hobson-Rogers delta hedge for the geometric Asian option with payoff functional $$H^G(S_T)=(e^{\frac1T\int_0^T\log S(t)\mathrm{d} t}-K)^+$$ is robust on $\A\cap\supp(S,\PP)$. Moreover, the hedging error at maturity is given by $$\frac12\int_0^T \lf{\s^n}(o(t,\w))^2-\s^{\mathrm{mkt}}(t,\w)^2\rg\w^2(t) \ppa{x}u(T-t,\log\w(t),\log\w(t)-o(t,\w),g(t,\w)) \mathrm{d} t,$$ where $u$ is the solution of the Cauchy problem \eq{asianpde-HR-geom}. \end{corollary}
Other models that generalize Hobson-Rogers and allow one to derive a finite-dimensional Markovian representation for the price process and its arithmetic mean are given by \citet{pascucci-foschi,salvatore-tankov}. They thus guarantee the existence of a smooth pricing functional for arithmetic Asian options, and robustness of the delta hedge can then be proved in the same way as in the Black-Scholes and Hobson-Rogers cases.
\subsection{Dynamic hedging of barrier options}
Barrier options are examples of path-dependent derivatives for which delta-hedging strategies are not robust.
Consider the case of an up-and-out barrier call option with strike price $K$ and barrier $U$, whose payoff functional is \begin{equation} \label{eq:barrier}
H(S_T)=(S(T)-K)^+\ind_{\{\overline S(T)<U\}}. \end{equation}
The pricing functional of a barrier option is determined by regular solutions of classical Dirichlet problems, suitably stopped at the barrier hitting times. The pricing functional for the claim with payoff \eqref{eq:barrier} is given, at time $t\in[0,T]$, by $$F(t,\w)=f(t\wedge \t_U(\w),\w(t\wedge \t_U(\w))),$$ where $\t_U(\w):=\inf\{t\geq0: \w(t)\in[U,+\infty)\}$ and $f$ is the $\C^{1,2}([0,T)\times(0,U))\cap\C([0,T]\times(0,U))$ solution of the following Dirichlet problem: \begin{equation} \label{eq:barrierPDE}
\left\{\begin{array}{ll}
\frac12\s^2(t)x^2\partial_{xx}^2f(t,x)+\partial_{t}f(t,x)=0,& (t,x)\in[0,T)\times (0,U),\\
f(t,U)=0,& t\in[0,T],\\
f(T,x)=(x-K)^{+},& x\in(0,U).
\end{array}\right. \end{equation} The delta-hedging strategy is then given by $$\phi(t,\w)=\partial_{x} f(t,\w(t))\ind_{[0,\t_U(\w))}(t).$$ Analogously to the application in \Sec{asian}, we can compute the hedging error of the delta hedge for the barrier option. However, unlike for Asian options, the delta hedge for barrier options fails to have the robustness property, because the price collapses at $t=\t_U$, disrupting the positivity of the Gamma. On the other hand, the Gamma of barrier options can be quite large in magnitude, so it is crucial to have a good estimate of volatility, in order to keep the hedging error as small as possible. \begin{remark} Let $H$ be the payoff functional of the up-and-out barrier call option with strike price $K$ and barrier $U$ in \eq{barrier}. Then the Black-Scholes delta hedge for $H$ is not robust to volatility mis-specifications. Any mismatch between the model volatility $\s$ and the realized volatility $\s^{\mathrm{mkt}}$ is amplified by the Gamma of the option as the barrier is approached and the resulting error can have an arbitrary sign due to the non-constant sign of the option Gamma near the barrier. \end{remark}
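The objects entering the stopped delta hedge are easy to evaluate on a discretized path. In the sketch below (stand-in path and parameters; $\t_U$ is approximated on the time grid) we compute the hitting time, the payoff \eq{barrier} and the set where the hedge $\phi$ is active.

\begin{verbatim}
import numpy as np

T, K, U = 1.0, 100.0, 120.0
u = np.linspace(0.0, T, 1001)
S = 100.0 * np.exp(0.15 * np.sin(4 * u))          # a stand-in path

hit = np.flatnonzero(S >= U)
tau_U = u[hit[0]] if hit.size else np.inf         # barrier hitting time
H = max(S[-1] - K, 0.0) if S.max() < U else 0.0   # (S(T)-K)^+ 1_{max S < U}
alive = u < tau_U                                 # where phi(t) = d_x f(t, S(t))
print(tau_U, H, alive.sum())
\end{verbatim}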
The assumptions of \prop{convex} are not satisfied, indeed: for any $(t,\w)\in[0,T]\times\C([0,T],\R_+)$, \begin{align*}
v^H(e;t,\w)={}&(\w(T)+\w(T)e-K)^+\ind_{(0,U)}\lf\sup_{s\in[0,T]}\lf\w(s)(1+e\ind_{[t,T]}(s))\rg\rg\\
={}&\w(T)\lf e-\frac{K-\w(T)}{\w(T)}\rg^+\ind_{(0,U)}(\g(e))\\ ={}&\w(T)\lf e-\frac{K-\w(T)}{\w(T)}\rg^+\ind_{\g^{-1}((0,U))}(e) \end{align*} where $\g:\R\rightarrow\R_+$, \begin{align*}
\g(e):={}&\sup_{s\in[0,T]}\lf\w(s)(1+e\ind_{[t,T]}(s))\rg\\ ={}&\max\left\{\overline\w(t),(1+e)\sup_{s\in[t,T]}\w(s)\right\}\\ ={}&\sup_{s\in[t,T]}\w(s)\lf e-\frac{\overline\w(t)-\sup_{s\in[t,T]}\w(s)}{\sup_{s\in[t,T]}\w(s)}\rg^++\overline\w(t). \end{align*} Here $\g^{-1}(A)$ denotes the preimage of $A\subset\R_+$ via $\g$, and $\overline\w(t):=\sup_{s\in[0,t]}\w(s)$. Since $\g$ is a positive non-decreasing continuous function, we have $$\g^{-1}((0,U))=\begin{cases}\emptyset,&\text{if }U\leq\overline\w(t)\\ \lf-\infty,\frac{U-\sup_{s\in[t,T]}\w(s)}{\sup_{s\in[t,T]}\w(s)}\rg,&\text{otherwise.}\end{cases}$$ Thus, there exists an interval $\mathcal I\subset\R$, $0\in\mathcal I$, such that $v^H(\cdot;t,\w):\mathcal I\rightarrow\R$ is convex if and only if $U>\sup_{s\in[0,T]}\w(s)$. However, \prop{convex} requires the map $v^H(\cdot;t,\w)$ to be convex for all $\w\in\supp(S,\PP)$ in order to imply vertical convexity of the value functional.
Thus, we observe that, unlike the case of Asian options, delta-hedging strategies do not provide a robust approach to the hedging of barrier options.
\chapter{Adjoint expansions in local L\'evy models}
This chapter is based on a joint work with Stefano Pagliarani and Andrea Pascucci, published in 2013 \cite{ppr}.
Analytical approximations and their applications to finance have been studied by several authors in the last decades because of their great importance in the calibration and risk management processes. The large body of the existing literature (see, for instance, \cite{Hagan99}, \cite{Howison2005}, \cite{WiddicksDuckAndricopoulosNewton2005}, \cite{GatheralHsuLaurenceOuyangWang2010}, \cite{BenhamouGobetMiri2010b}, \cite{CorielliFoschiPascucci2010}, \cite{ChengCostanzinoLiechtyMazzucatoNistor2011}) is mainly devoted to purely diffusive (local and stochastic volatility) models or, as in \cite{BenhamouGobetMiri2009} and \cite{XuZheng2010}, to local volatility (LV) models with Poisson jumps, which can be approximated by Gaussian kernels.
The classical result by Hagan \cite{Hagan99} is a particular case of our expansion, in the sense that for a standard LV model with time-homogeneous coefficients our formulae reduce to Hagan's ones (see \Sec{secsimpl}). While Hagan's results are heuristic, here we also provide explicit error estimates, valid for time-dependent coefficients.
The results of \Sec{Merton} on the approximation of the transition density for jump-diffusions are essentially analogous to the results in \cite{BenhamouGobetMiri2009}: however in \cite{BenhamouGobetMiri2009} ad-hoc Malliavin techniques for LV models with Merton jumps are used and only a first order expansion is derived. Here we use different techniques (PDE and Fourier methods) which allow us to handle the much more general class of local L\'evy processes: this is a very significant difference from previous research. Moreover we derive higher order approximations, up to the $4^{\text{th}}$ order.
Our approach is also more general than the so-called ``pa\-ra\-me\-trix'' methods recently proposed in \cite{CorielliFoschiPascucci2010} and \cite{ChengCostanzinoLiechtyMazzucatoNistor2011} as an approximation method in finance. The parametrix method is based on repeated application of Duhamel's principle which leads to a recursive integral representation of the fundamental solution: the main problem with the parametrix approach is that, even in the simplest case of a LV model, it is hard to compute explicitly the parametrix approximations of order greater than one. As a matter of fact, \cite{CorielliFoschiPascucci2010} and \cite{ChengCostanzinoLiechtyMazzucatoNistor2011} only contain first order formulae. The adjoint expansion method contains the parametrix approximation {\it as a particular case}, that is at order zero and in the purely diffusive case. However the general construction of the adjoint expansion is substantially different and allows us to find explicit higher-order formulae for the general class of local L\'evy processes.
\section{General framework} \label{sec:sec1}
In a local L\'evy model, we assume that the log-price process $X$ of the underlying asset of interest solves the SDE \begin{equation}\label{X}
\mathrm{d} X(t)=\m(t,X(t-))\mathrm{d} t+\s(t,X(t)) \mathrm{d} W(t)+ \mathrm{d} J(t). \end{equation} In \eqref{X}, $W$ is a standard real Brownian motion on a filtered probability space $(\O,\F,(\F_t)_{0\leq t\leq T},\mathbb{P})$ with the usual assumptions on the filtration and $J$ is a pure-jump L\'evy process, independent of $W$, with L\'evy triplet $(\m_{1},0,\n)$. In order to guarantee the martingale property for the discounted asset price $\tilde{S}(t):=S_{0}e^{X(t)-rt}$, we set \begin{equation}\label{30}
\m(t,x)=\rle-\m_{1}-\frac{\s^{2}(t,x)}{2}, \end{equation} where \begin{equation}\label{31}
\rle=r-\int_{\R}\left(e^{y}-1-y\mathds{1}_{\{|y|<1\}}\right)\n(dy). \end{equation} We denote by
$$X^{t,x}:T\mapsto X^{t,x}(T)$$ the solution of \eqref{X} starting from $x$ at time $t$ and by
$$\p_{X^{t,x}(T)}(\x)=E\left[e^{i\x X^{t,x}(T)}\right],\qquad \x\in\R,$$ the characteristic function of $X^{t,x}(T)$. If $X^{t,x}(T)$ admits a density $\G(t,x;T,\cdot)$, then its characteristic function is equal to
$$\p_{X^{t,x}(T)}(\x)=
\int_{\R} e^{i\x y}\G(t,x;T,y)dy.$$ Notice that $\G(t,x;T,y)$ is the fundamental solution of the Kolmogorov operator \begin{equation}\label{L} \begin{split}
Lu(t,x)&= \frac{\s^{2}(t,x)}{2}\left({\partial}_{xx}-{\partial}_{x}\right)u(t,x)+\rle{\partial}_{x}u(t,x)+{\partial}_{t}u(t,x)\\
&\quad+\int_{\R}\left(u(t,x+y)-u(t,x)-{\partial}_{x}u(t,x)y\mathds{1}_{\{|y|<1\}}\right)\n(dy). \end{split} \end{equation}
\begin{example}\label{ex3} Let $J$ be a compound Poisson process with Gaussian jumps, that is
$$J(t)=\sum_{n=1}^{N(t)} Z_n $$ where $N(t)$ is a Poisson process with intensity $\l$ and $Z_n$ are i.i.d. random variables independent of $N(t)$ with Normal distribution $\mathcal{N}_{m,\d^{2}}$. In this case, $\n=\lambda\mathcal{N}_{m,\d^{2}}$ and
$$\m_{1}=\int_{|y|<1}y\n(dy).$$ Therefore the drift condition \eqref{30} reduces to \begin{align}\label{30b}
\m(t,x)=r_{0}-\frac{\s^{2}(t,x)}{2}, \end{align} where \begin{equation}\label{30c}
r_{0}=r-\int_{\R}\left(e^{y}-1\right)\n(dy)=r-\lambda\left(e^{m+\frac{\delta^2}2}-1\right). \end{equation} Moreover, the characteristic operator can be written in the equivalent form \begin{equation}\label{LPoi} \begin{split}
L u(t,x)&=\frac{\s^{2}(t,x)}{2}\left({\partial}_{xx}-{\partial}_{x}\right)u(t,x)+r_{0}{\partial}_{x}u(t,x)+{\partial}_{t}u(t,x)\\
&\quad+\int_{\R}\left(u(t,x+y)-u(t,x)\right)\n(dy). \end{split} \end{equation} \end{example} \begin{example}\label{ex4} Let $J$ be a Variance-Gamma process (cf. \cite{MadanSeneta1990}) obtained by subordinating a Brownian motion with drift $\th$ and standard deviation $\r$, by a Gamma process with variance $\kappa$ and unitary mean. In this case the L\'evy measure is given by \begin{equation}\label{70}
\n(dx)=\frac{e^{-\l_{1}x}}{\kappa x}\caratt_{\{x>0\}}dx+\frac{e^{\l_{2}x}}{\kappa|x|}\caratt_{\{x<0\}}dx \end{equation} where
$$\l_{1}=\left(\sqrt{\frac{\th^{2}\kappa^{2}}{4}+\frac{\r^{2}\kappa}{2}}+\frac{\th\kappa}{2}\right)^{-1},
\qquad \l_{2}=\left(\sqrt{\frac{\th^{2}\kappa^{2}}{4}+\frac{\r^{2}\kappa}{2}}-\frac{\th\kappa}{2}\right)^{-1}.$$ The risk-neutral drift in \eqref{X} is equal to
$$\m(t,x)=r_{0}-\frac{\s^{2}(t,x)}{2}$$ where \begin{equation}\label{71}
r_{0}=r+\frac{1}{\kappa}\log\left[\left(1-\l_{1}^{-1}\right)\left(1+\l_{2}^{-1}\right)\right]
=r+\frac{1}{\kappa}\log\left(1-\kappa\left(\th+\frac{\r^{2}}{2}\right)\right), \end{equation} and the expression of the characteristic operator $L$ is the same as in \eqref{LPoi} with $\n$ and $r_{0}$ as in \eqref{70} and \eqref{71} respectively. \end{example}
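The drift corrections of the two examples are straightforward to evaluate; the following lines (with illustrative parameter values) compute $r_0$ in \eqref{30c} and verify numerically that the two expressions for $r_0$ in \eqref{71} coincide.

\begin{verbatim}
import numpy as np

r = 0.02
# compound Poisson with Gaussian jumps, eq. (30c)
lam, m, delta = 0.5, -0.1, 0.2
r0_merton = r - lam * (np.exp(m + delta ** 2 / 2) - 1.0)

# Variance-Gamma, eq. (71)
theta, rho, kappa = -0.1, 0.25, 0.3
s = np.sqrt(theta ** 2 * kappa ** 2 / 4 + rho ** 2 * kappa / 2)
inv_lam1, inv_lam2 = s + theta * kappa / 2, s - theta * kappa / 2
r0_vg_a = r + np.log((1 - inv_lam1) * (1 + inv_lam2)) / kappa
r0_vg_b = r + np.log(1 - kappa * (theta + rho ** 2 / 2)) / kappa
print(r0_merton, r0_vg_a, r0_vg_b)   # the last two agree
\end{verbatim}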
Our goal is to give an accurate analytic approximation of the characteristic function and, when possible, of the transition density of $X$. The general idea is to consider an approximation of the volatility coefficient $\s$. More precisely, to shorten notations we set \begin{equation}\label{a}
a(t,x)=\s^{2}(t,x) \end{equation} and we assume that $a$ is regular enough: more precisely, for a fixed $N\in\NN$, we make the following
\noindent{\bf Assumption $\text{A}_{N}$.} {\it The function $a=a(t,x)$ is continuously differentiable with respect to $x$ up to order $N$. Moreover, the function $a$ and its derivatives in $x$ are bounded and Lipschitz continuous in $x$, uniformly with respect to $t$.}
Next, we fix a basepoint $\bar{x}\in\R$ and consider the $N^{\text{th}}$-order Taylor polynomial of $a(t,x)$ about $\bar{x}$:
$$ \a_0(t)+2\sum_{n=1}^{N}\a_n(t)(x-\bar{x})^n,$$ where $\a_0(t)=a(t,\bar{x})$ and
\begin{equation}\label{43bis}
\a_n(t)=\frac{1}{2}\frac{\partial_x^na(t,\bar{x})}{n!}, \qquad n\le N.
\end{equation} Then we introduce the $n^{\text{th}}$-order approximation of $L$: \begin{equation}\label{43}
L_{n}:=L_{0}+\sum_{k=1}^{n}\a_k(t)(x-\bar{x})^k\lf\partial_{xx}-\partial_x\rg, \qquad n\le N, \end{equation} where \begin{equation}\label{42} \begin{split}
L_{0} u(t,x)&=\frac{\a_0(t)}{2} \lf\partial_{xx}u(t,x)-\partial_xu(t,x)\rg + \rle\partial_{x}u(t,x)+{\partial}_{t}u(t,x)\\
&\quad+\int_{\R}\left(u(t,x+y)-u(t,x)-\partial_{x}u(t,x)y\caratt_{\{|y|<1\}}\right)\n(dy). \end{split} \end{equation} Following the perturbation method proposed in \cite{PagliaraniPascucci2011}, and also recently used in \cite{FoschiPagliaraniPascucci2011} for the approximation of Asian options, the $n^{\text{th}}$-order approximation of the fundamental solution $\G$ of $L$ is defined by \begin{equation}\label{34}
\Gamma^{n}(t,x;T,y):=\sum_{k=0}^n G^k(t,x;T,y), \qquad t<T,\ x,y\in\R. \end{equation} The leading term $G^0$ of the expansion in \eqref{34} is the fundamental solution of $L_{0}$ and, for any $(T,y)\in\R_{+}\times\R$ and $k\le N$, the functions $G^{k}(\cdot,\cdot;T,y)$ are defined recursively in terms of the solutions of the following sequence of Cauchy problems on the strip $]0,T[\times \R$: \begin{equation}\label{2.2}
\begin{cases}
L_{0} G^k(t,x;T,y)\hspace{-9pt} &=- \sum\limits_{h=1}^k\left(L_{h}-L_{h-1}\right) G^{k-h}(t,x;T,y)\\
\hspace{-9pt} &=- \sum\limits_{h=1}^k\a_h(t)(x-\bar{x})^h \lf\partial_{xx}-\partial_x\rg
G^{k-h}(t,x;T,y),\\
\hspace{9pt}G^k(T,x;T,y) \hspace{-9pt}&= 0
.
\end{cases} \end{equation} In the sequel, when we want to specify explicitly the dependence of the approximation $\G^{n}$ on the basepoint $\xbar$, we shall use the notation \begin{equation}\label{and123}
\Gamma^{\xbar,n}(t,x;T,y)\equiv \Gamma^{n}(t,x;T,y). \end{equation}
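The Taylor coefficients \eqref{43bis} are immediate to generate symbolically; the sketch below uses sympy on an illustrative coefficient $a(t,x)$ (both the library and the example function are assumptions made here).

\begin{verbatim}
import sympy as sp

t, x, xbar = sp.symbols('t x xbar')
# an illustrative a(t,x) = sigma^2(t,x), smooth and bounded in x
a = (sp.Rational(1, 5) + sp.Rational(1, 10) * sp.exp(-x)) ** 2 * (1 + t)

N = 3
alpha = [a.subs(x, xbar)]                          # alpha_0(t)
alpha += [sp.diff(a, x, n).subs(x, xbar) / (2 * sp.factorial(n))
          for n in range(1, N + 1)]                # alpha_n(t), n >= 1
for n, c in enumerate(alpha):
    print(n, sp.simplify(c.subs(xbar, 0)))
\end{verbatim}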
In \Sec{Merton} we show that, in the case of a LV model with Gaussian jumps, it is possible to find {\it the explicit solutions} to the problems \eqref{2.2} by an iterative argument. When general L\'evy jumps are considered, it is still possible to compute the explicit solution of problems \eqref{2.2} {\it in the Fourier space}. Indeed, in \Sec{LV-J}, we get an expansion of the characteristic function $\p_{X^{t,x}(T)}$ having as leading term the characteristic function of the process whose Kolmogorov operator is $L_{0}$ in \eqref{42}.
We explicitly notice that, if the function $\s$ only depends on time, then {\it the approximation in \eqref{34} is exact at order zero.}
We now provide global error estimates for the approximation in the purely diffusive case. The proof is postponed to the Appendix (\Sec{app}). \begin{theorem}\label{t11} Assume the parabolicity condition \begin{equation}\label{80}
m\le \frac{a(t,x)}{2}\le M,\qquad (t,x)\in[0,T]\times\R, \end{equation} where $m,M$ are positive constants and let $\bar{x}=x$ or $\xbar=y$ in \eqref{and123}. Under Assumption A$_{N+1}$, for any $\e>0$ we have \begin{equation}\label{81}
\left|\Gamma(t,x;T,y)-\Gamma^{\xbar,N}(t,x;T,y)\right|\le
g_{N}(T-t)\bar{\Gamma}^{M+\e}(t,x;T,y), \end{equation} for $x,y\in\R$ and $t\in [0,T[$, where $\bar{\Gamma}^{M}$ is the Gaussian fundamental solution of the heat operator
$$M{\partial}_{xx}+{\partial}_{t},$$ and $g_{N}(s)=\text{O}\left(s^{\frac{N+1}{2}}\right)$ as $s\to 0^{+}$. \end{theorem}
Theorem \ref{t11} improves some known results in the literature. In particular in \cite{BenhamouGobetMiri2010b} asymptotic estimates for option prices in terms of $(T-t)^{\frac{N+1}{2}}$ are proved under a stronger assumption on the regularity of the coefficients, equivalent to Assumption A$_{3N+2}$. Here we provide error estimates for the transition density: error bounds for option prices can be easily derived from \eqref{81}. Moreover, for small $N$ it is not difficult to find the explicit expression of $g_{N}$.
Estimate \eqref{81} also justifies a time-splitting procedure which nicely adapts to our approximation operators, as shown in detail in Remark 2.7 in \cite{PagliaraniPascucci2011}.
\section{LV models with Gaussian jumps} \label{sec:Merton}
In this section we consider the SDE \eqref{X} with $J$ as in Example \ref{ex3}, namely $J$ is a compound Poisson process with Gaussian jumps. Clearly, in the particular case of a constant diffusion coefficient $\s(t,x)\equiv \s$, we have the classical Merton jump-diffusion model \cite{Merton1976}:
$$X^{\text{Merton}}(t)=\left(r_0-\frac{\s^{2}}{2}\right) t + \s W(t) + J(t),$$ with $r_{0}$ as in \eqref{30c}. We recall that the analytical approximation of this kind of model has been recently studied by Benhamou, Gobet and Miri in \cite{BenhamouGobetMiri2009} by Malliavin calculus techniques.
The expression of the pricing operator $L$ was given in \eqref{LPoi} and in this case the leading term of the approximation (cf. \eqref{42}) is equal to \begin{equation}\label{L0} \begin{split}
L_{0} v(t,x)=&\,\frac{\a_0(t)}{2} \lf\partial_{xx}v(t,x)-\partial_xv(t,x)\rg + r_{0} \partial_xv(t,x)\\
&+ \pa_tv(t,x) + \int_{\R}\left(v(t,x+y)-v(t,x)\right)\nu(d y). \end{split} \end{equation} The fundamental solution of $L_{0}$ is the transition density of a Merton process, that is \begin{equation}\label{Gamma0}
G^{0}(t,x;T,y)=e^{-\l(T-t)} \sum_{n=0}^{+\infty} \frac{(\l(T-t))^n}{n!} \Gamma_n(t,x;T,y), \end{equation} where \begin{equation}\label{36} \begin{split}
\Gamma_n(t,x;T,y)&=\frac{1}{\sqrt{2\pi \left(A(t,T)+n\d^2\right)}}\,e^{-\frac{\lf x-y+(T-t){r_{0}}-\frac12A(t,T)+nm\rg^2}{2\left(A(t,T)+n\d^2\right)}}, \\
A(t,T)&=\int_t^T\a_0(s)d s. \end{split} \end{equation} In order to determine the explicit solution to problems \eqref{2.2} for $k\ge 1$, we use some elementary properties of the functions $\lf\Gamma_n\rg_{n\geq0}$. The following lemma can be proved as Lemma 2.2 in \cite{PagliaraniPascucci2011}. \begin{lemma}\label{l1} For any $x,y,\xbar\in\R$, $t<s<T$ and $n,k\in\NN_{0}$, we have \begin{align}\label{repr}
\Gamma_{n+k}(t,x;T,y)=&\int_{\R}\Gamma_n(t,x;s,\eta) \Gamma_k(s,\eta;T,y)d \eta,\\
\pa^{k}_y\Gamma_n(t,x;T,y) =&\, (-1)^{k}\pa^{k}_x\Gamma_n(t,x;T,y),\label{d}\\
(y-\bar{x})^{k} \Gamma_n(t,x;T,y) =&\, V_{t,T,x,n}^{k}\Gamma_n(t,x;T,y),\label{V} \end{align} where $V_{t,T,x,n}$ is the operator defined by \begin{equation}\label{38} \begin{split}
V_{t,T,x,n}f(x) =& \left(x-\bar{x}+(T-t){r_{0}}-\frac12 A(t,T) +nm\right)f(x)\\
& +\lf A(t,T) +n\d^2\rg\pa_x f(x). \end{split} \end{equation} \end{lemma} Our first results are the following first and second order expansions of the transition density $\G$. \begin{theorem}[1st order expansion]\label{t1} The solution $G^1$ of the Cauchy problem \eqref{2.2} with $k=1$ is given by \begin{align}\label{11}
G^1(t,x;T,y) =& \sum_{n,k=0}^{+\infty} J^1_{n,k}(t,T,x) \Gamma_{n+k}(t,x;T,y), \end{align} where $J^1_{n,k}(t,T,x)$ is the differential operator defined by
\begin{equation}\label{13}
J^1_{n,k}(t,T,x) = e^{-\l(T-t)} \frac{\l^{n+k}}{n!k!} \int_t^T \a_1(s) (s-t)^n(T-s)^k V_{t,s,x,n} d s\,
({\partial}_{xx}-{\partial}_{x}).
\end{equation} \end{theorem}
\noindent{\it Proof.} By the standard representation formula for solutions to the non-homogeneous parabolic Cauchy problem \eqref{2.2} with null final condition, we have \begin{align*}
G^1(t,x;T,y) &=\int_t^T\int_{\R}G^0(t,x;s,\eta) \a_1(s) (\eta-\bar{x})\cdot\\
&\quad\cdot (\pa_{\eta\eta}-\pa_{\eta}) G^0(s,\eta;T,y)d \eta d
s= \intertext{(by \eqref{V})}
&= \sum_{n=0}^{+\infty} \frac{\l^{n}}{n!}\int_t^T \a_1(s) e^{-\l(s-t)} (s-t)^n\cdot\\
&\quad\cdot V_{t,s,x,n} \int_{\R}\Gamma_n(t,x;s,\eta) (\pa_{\eta\eta}-\pa_{\eta}) G^0(s,\eta;T,y)d \eta d s= \intertext{(by parts)}
&= e^{-\l(T-t)}\sum_{n,k=0}^{+\infty} \frac{\l^{n+k}}{n!k!} \int_t^T\a_1(s) (T-s)^k (s-t)^n \cdot\\
&\quad\cdot V_{t,s,x,n} \int_{\R}(\pa_{\eta\eta}+\pa_{\eta}) \Gamma_n(t,x;s,\eta) \Gamma_k(s,\eta;T,y)d \eta ds= \intertext{(by \eqref{d} and \eqref{repr})}
&= e^{-\l(T-t)} \sum_{n,k=0}^{\infty} \frac{\l^{n+k}}{n!k!} \int_t^T \a_1(s) (T-s)^k (s-t)^n V_{t,s,x,n}d s\cdot\\
&\quad\cdot (\pa_{xx}-\pa_x) \Gamma_{n+k}(t,x;T,y) \end{align*} and this proves \eqref{11}-\eqref{13}. \qquad\endproof
\begin{remark}\label{r4} A straightforward but tedious computation shows that the operator $J^1_{n,k}(t,T,x)$ can be rewritten in the more convenient form \begin{equation}\label{J1}
J^1_{n,k}(t,T,x) = \sum_{i=1}^{3}\sum_{j=0}^1 f^1_{n,k,i,j}(t,T)(x-\bar{x})^j \pa_x^i, \end{equation} for some deterministic functions $f^1_{n,k,i,j}$. \end{remark}
\begin{theorem}[2nd order expansion]\label{t2} The solution $G^2$ of the Cauchy problem \eqref{2.2} with $k=2$ is given by \begin{align}\nonumber
G^2(t,x;T,y) =& \sum_{n,h,k=0}^{+\infty} J^{2,1}_{n,h,k}(t,T,x) \Gamma_{n+h+k}(t,x;T,y) \\ \label{12}
& + \sum_{n,k=0}^{\infty } J^{2,2}_{n,k}(t,T,x) \Gamma_{n+k}(t,x;T,y), \end{align} where \begin{align*}
J^{2,1}_{n,h,k}(t,T,x) =&\, \frac{\l^{n}}{n!} \int_t^T \a_1(s) e^{-\l(s-t)} (s-t)^n V_{t,s,x,n} ({\partial}_{xx}-{\partial}_{x}) \tilde{J}^1_{n,h,k}(t,s,T,x) d s \\
J^{2,2}_{n,k}(t,T,x) =&\, e^{-\l(T-t)} \frac{\l^{n+k}}{n!k!} \int_t^T \a_2(s) (s-t)^n(T-s)^k V_{t,s,x,n}^2 d s\, ({\partial}_{xx}-{\partial}_{x}) \end{align*} and $\tilde{J}^1_{n,h,k}$ is the ``adjoint'' operator of $J^1_{h,k}$, defined by \begin{equation}\label{Jtilde}
\tilde{J}^1_{n,h,k}(t,s,T,x) = \sum_{i=1}^3\sum_{j=0}^1 f^1_{h,k,i,j}(s,T)V_{t,s,x,n}^j {\partial}_{x}^i \end{equation} with $f^1_{h,k,i,j}$ as in \eqref{J1}. Also in this case we have the alternative representation \begin{align}
J^{2,1}_{n,h,k}(t,T,x) =& \sum_{i=1}^{6}\sum_{j=0}^2f^{2,1}_{n,h,k,i,j}(t,T)(x-\bar{x})^j \pa_x^i \label{J21} \\
J^{2,2}_{n,k}(t,T,x) =& \sum_{i=1}^{6}\sum_{j=0}^2f^{2,2}_{n,k,i,j}(t,T)(x-\bar{x})^j \pa_x^i,\label{J22} \end{align} with $f^{2,1}_{n,h,k,i,j}$ and $f^{2,2}_{n,k,i,j}$ deterministic functions. \end{theorem}
\noindent{\it Proof.} We show a preliminary result: from formulae \eqref{J1} and \eqref{Jtilde} for $J^1$ and $\tilde{J}^1$ respectively, it follows that \begin{align}\nonumber
& \int_{\R}\Gamma_n(t,x;s,\eta) J^1_{h,k}(s,T,\eta) \Gamma_{h+k}(s,\eta;T,y) d \eta = \intertext{(by \eqref{d} and \eqref{V})}\nonumber
& = \int_{\R}\tilde{J}^1_{n,h,k}(s,T,x) \Gamma_n(t,x;s,\eta) \Gamma_{h+k}(s,\eta;T,y) d \eta \\ \nonumber
& = \tilde{J}^1_{n,h,k}(s,T,x) \int_{\R}\Gamma_n(t,x;s,\eta) \Gamma_{h+k}(s,\eta;T,y) d \eta = \intertext{(by \eqref{repr})} \label{15}
& = \tilde{J}^1_{n,h,k}(s,T,x) \Gamma_{n+h+k}(t,x;T,y). \end{align} Now we have
$$G^2(t,x;T,y) = I_1 + I_2, $$ where, proceeding as before, {\allowdisplaybreaks \begin{align*}
I_1 &= \int_t^T\int_{\R}G^0(t,x;s,\eta) \a_1(s) (\eta-\bar{x}) (\pa_{\eta\eta}-\pa_{\eta}) G^1(s,\eta;T,y)d \eta d s \\
&= \sum_{n,h,k=0}^{+\infty} \frac{\l^{n}}{n!} \int_t^T\a_1(s)e^{-\l(s-t)} (s-t)^n \cdot \\
&\quad \cdot V_{t,s,x,n}\int_{\R}\Gamma_n(t,x;s,\eta)(\pa_{\eta\eta}-\pa_{\eta}) J^1_{h,k}(s,T,\eta) \Gamma_{h+k}(s,\eta;T,y) d \eta d s \\
&= \sum_{n,h,k=0}^{+\infty} \frac{\l^{n}}{n!} \int_t^T\a_1(s)e^{-\l(s-t)} (s-t)^n \cdot \\
&\quad \cdot V_{t,s,x,n} (\pa_{xx}-\pa_x) \int_{\R}\Gamma_n(t,x;s,\eta) J^1_{h,k}(s,T,\eta) \Gamma_{h+k}(s,\eta;T,y) d \eta d s= \intertext{(by \eqref{15})}
&= \sum_{n,h,k=0}^{+\infty} \frac{\l^{n}}{n!} \int_t^T \a_1(s) e^{-\l(s-t)} (s-t)^n V_{t,s,x,n} (\pa_{xx}-\pa_x)
\tilde{J}^1_{n,h,k}(s,T,x)d s\cdot\\
&\quad\cdot \Gamma_{n+h+k}(t,x;T,y) \\
&= \sum_{n,h,k=0}^{+\infty} J^{2,1}_{n,h,k}(t,T,x) \Gamma_{n+h+k}(t,x;T,y) \end{align*}} and {\allowdisplaybreaks \begin{align*}
I_2 &= \int_t^T\int_{\R}G^0(t,x;s,\eta) \a_2(s) (\eta-\bar{x})^2 (\pa_{\eta\eta}-\pa_{\eta}) G^0(s,\eta;T,y)d \eta d s \\
&= e^{-\l(T-t)}\sum_{n,k=0}^{+\infty} \frac{\l^{n+k}}{n!k!} \int_t^T \a_2(s) (T-s)^k (s-t)^n \cdot\\
&\quad\cdot V_{t,s,x,n}^2 \int_{\R}\Gamma_n(t,x;s,\eta) (\pa_{\eta\eta}-\pa_{\eta}) \Gamma_k(s,\eta;T,y)d \eta d s \\
&= e^{-\l(T-t)} \sum_{n,k=0}^{+\infty} \frac{\l^{n+k}}{n!k!} \int_t^T \a_2(s) (T-s)^k (s-t)^n \cdot\\
&\quad\cdot V_{t,s,x,n}^2 (\pa_{xx}-\pa_x) \int_{\R}\Gamma_n(t,x;s,\eta) \Gamma_k(s,\eta;T,y)d \eta d s \\
&= e^{-\l(T-t)} \sum_{n,k=0}^{+\infty} \frac{\l^{n+k}}{n!k!} \int_t^T \a_2(s) (T-s)^k (s-t)^n \cdot\\
&\quad\cdot V_{t,s,x,n}^2d s\, (\pa_{xx}-\pa_x) \Gamma_{n+k}(t,x;T,y) \\
&= \sum_{n,k=0}^{+\infty} J^{2,2}_{n,k}(t,T,x) \Gamma_{n+k}(t,x;T,y). \end{align*}} This concludes the proof.\qquad\endproof
\begin{remark} Since the derivatives of a Gaussian density can be expressed in terms of Hermite polynomials, the computation of the terms of the expansion \eqref{34} is very fast. Indeed, we have \begin{equation*}
\frac{\pa_x^i\Gamma_n(t,x;T,y)}{\Gamma_n(t,x;T,y)} = \frac{(-1)^{i}h_{i,n}(t,T,x-y)}{\left(2 \left(A(t,T) +n\d^2\right)
\right)^{\frac{i}{2}}} \end{equation*} where
$$h_{i,n}(t,T,z)=\mathbf{H}_{i}\lf \frac{z+(T-t){r_{0}}-\frac12 A(t,T) +nm}{\sqrt{2 \left(A(t,T) +n\d^2\right)}}\rg$$ and $\mathbf{H}_{i}=\mathbf{H}_{i}(x)$ denotes the Hermite polynomial of degree $i$. Thus we can rewrite the terms $\lf G^k\rg_{k=1,2}$ in \eqref{11} and \eqref{12} as follows: {\allowdisplaybreaks \begin{equation}\label{35} \begin{split}
G^1(t,x;T,y) =& \sum_{n,k=0}^{\infty}\mathbf{G}_{n,k}^1(t,x;T,y) \Gamma_{n+k}(t,x;T,y) \\
G^2(t,x;T,y) =& \sum_{n,h,k=0}^{\infty} \mathbf{G}_{n,h,k}^{2,1}(t,x;T,y) \Gamma_{n+h+k}(t,x;T,y)\\
& +\sum_{n,k=0}^{\infty} \mathbf{G}_{n,k}^{2,2}(t,x;T,y) \Gamma_{n+k}(t,x;T,y), \end{split} \end{equation} } where {\allowdisplaybreaks \begin{align*} \mathbf{G}_{n,k}^1(t,x;T,y) =&\sum_{i=1}^{3}(-1)^{i}\sum_{j=0}^1f^1_{n,k,i,j}(t,T)(x-\bar{x})^j \frac{h_{i,n+k}(t,T,x-y)}{\left(2\left( A(t,T) +(n+k)\d^2\right)\right)^{\frac{i}{2}}} \\ \mathbf{G}_{n,h,k}^{2,1}(t,x;T,y) =& \sum_{i=1}^{6}(-1)^{i}\sum_{j=0}^1f^{2,1}_{n,h,k,i,j}(t,T)(x-\bar{x})^j \frac{h_{i,n+h+k}(t,T,x-y)}{\left(2 \left(A(t,T) +(n+h+k)\d^2\right)\right)^{\frac{i}{2}}} \\ \mathbf{G}_{n,k}^{2,2}(t,x;T,y) =& \sum_{i=1}^{6}(-1)^{i}\sum_{j=0}^1f^{2,2}_{n,k,i,j}(t,T)(x-\bar{x})^j \frac{h_{i,n+k}(t,T,x-y)}{\left(2 \left(A(t,T) +(n+k)\d^2\right)\right)^{\frac{i}{2}}}. \end{align*}} In the practical implementation, we truncate the series in \eqref{Gamma0} and \eqref{35} to a finite number of terms, say $M\in\mathds{N}\cup\{0\}$. Therefore we put \begin{equation*} \begin{split}
G^0_{M}(t,x;T,y) &= e^{-\l(T-t)} \sum_{n=0}^{M} \frac{(\l(T-t))^n}{n!}
\Gamma_n(t,x;T,y),\\
G^1_{M}(t,x;T,y) &= \sum_{n,k=0}^{M}\mathbf{G}_{n,k}^1(t,x;T,y) \Gamma_{n+k}(t,x;T,y), \\
G^2_{M}(t,x;T,y) &= \sum_{n,h,k=0}^{M} \mathbf{G}_{n,h,k}^{2,1}(t,x;T,y) \Gamma_{n+h+k}(t,x;T,y)\\
&\quad +\sum_{n,k=0}^{M} \mathbf{G}_{n,k}^{2,2}(t,x;T,y) \Gamma_{n+k}(t,x;T,y), \end{split} \end{equation*} and we approximate the density $\G$ by \begin{equation}\label{33}
\Gamma^{2}_{M}(t,x;T,y):=G^0_{M}(t,x;T,y)+G^1_{M}(t,x;T,y)+G^2_{M}(t,x;T,y). \end{equation} \end{remark}
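For reference, the truncated leading term $G^0_{M}$ is a Poisson-weighted mixture of the Gaussian densities \eqref{36} and can be implemented directly; in the sketch below the model parameters and the quadrature used for $A(t,T)$ are illustrative assumptions.

\begin{verbatim}
import numpy as np
from math import factorial

def G0_M(t, x, T, y, alpha0, lam, m, delta, r0, M=20):
    """alpha0: function of time, alpha0(s) = a(s, xbar)."""
    s = np.linspace(t, T, 2001)
    A = np.trapz(alpha0(s), s)                 # A(t,T) = int_t^T alpha_0
    out = 0.0
    for n in range(M + 1):
        var = A + n * delta ** 2
        mean = x + (T - t) * r0 - 0.5 * A + n * m
        gauss = np.exp(-(y - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        out += (lam * (T - t)) ** n / factorial(n) * gauss
    return np.exp(-lam * (T - t)) * out

print(G0_M(0.0, np.log(100.0), 1.0, np.log(105.0),
           alpha0=lambda s: 0.04 + 0.01 * s,
           lam=0.5, m=-0.1, delta=0.2, r0=0.02))
\end{verbatim}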
Next we denote by $C(t,S(t))$ the price at time $t<T$ of a European option with payoff function $\p$ and maturity $T$; for instance,
$\p(y)=\left(y-K\right)^{+}$ in the case of a Call option with strike $K$. From the expansion of the density in \eqref{33}, we get the following second order approximation formula. \begin{remark} We have
$$ C(t,S(t)) \approx e^{-r(T-t)} u_{M}(t,\log S(t)) $$ where \begin{align}\nonumber
u_{M}(t,x)
\nonumber
&=\int_{\R^+} \frac1S \Gamma_{ M }^2(t,x;T,\log S) \p(S) d S\\
\nonumber
&= e^{-\l(T-t)} \sum_{n=0}^{ M } \frac{(\l(T-t))^n}{n!} \mathrm{CBS}_n(t,x) \\ \nonumber
&\quad +\sum_{n,k=0}^{ M } \left(J^1_{n,k}(t,T,x)+J^{2,2}_{n,k}(t,T,x)\right) \mathrm{CBS}_{n+k}(t,x) \\ \label{uM}
&\quad + \sum_{n,h,k=0}^{ M } J^{2,1}_{n,h,k}(t,T,x) \mathrm{CBS}_{n+h+k}(t,x) \end{align} and $\mathrm{CBS}_n(t,x)$ is the BS price\footnote{Here the BS price is expressed as a function of the time $t$ and of the log-asset $x$.} under the Gaussian law $\Gamma_n(t,x;T,\cdot)$ in \eqref{36}, namely
$$\mathrm{CBS}_n(t,x) = \int_{\R^+} \frac1S \Gamma_n(t,x;T,\log S) \p(S) d S.$$ \end{remark}
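For a Call payoff $\p(S)=(S-K)^{+}$, $\mathrm{CBS}_n$ is a plain lognormal expectation: under $\Gamma_n(t,x;T,\cdot)$ the terminal log-price is Gaussian with mean $x+(T-t)r_{0}-\frac12 A(t,T)+nm$ and variance $A(t,T)+n\d^2$. The following sketch (illustrative parameters; scipy assumed available) implements the resulting Black-Scholes-type formula.

\begin{verbatim}
import numpy as np
from scipy.stats import norm

def CBS_n(n, t, x, T, K, A, r0, m, delta):
    # E[(e^Y - K)^+] with Y ~ N(mu, s^2) under Gamma_n(t,x;T,.)
    mu = x + (T - t) * r0 - 0.5 * A + n * m
    s = np.sqrt(A + n * delta ** 2)
    d2 = (mu - np.log(K)) / s
    return np.exp(mu + s ** 2 / 2) * norm.cdf(d2 + s) - K * norm.cdf(d2)

A = 0.04   # A(t,T) = int_t^T alpha_0(s) ds, precomputed
print(CBS_n(0, 0.0, np.log(100.0), 1.0, 100.0, A, r0=0.02, m=-0.1, delta=0.2))
\end{verbatim}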
\subsection{Simplified Fourier approach for LV models} \label{sec:secsimpl} Equation \eqref{X} with $J=0$ reduces to the standard SDE of a LV model. In this case we can simplify the proof of Theorems \ref{t1}-\ref{t2} by using Fourier analysis methods. Let us first notice that $L_{0}$ in \eqref{L0} becomes \begin{equation}\label{40}
L_{0}=\frac{\a_0(t)}{2} \lf\partial_{xx}-\partial_x\rg + r \partial_x + \pa_t, \end{equation} and its fundamental solution is the Gaussian density
$$G^0(t,x;T,y) =\frac{1}{\sqrt{2\pi A(t,T)}}\,e^{-\frac{\lf x-y+(T-t)r-\frac12A(t,T)\rg^2}{2A(t,T)}},$$ with $A$ as in \eqref{36}. \begin{corollary}[1st order expansion]\label{cor1} In the case $\l=0$, the solution $G^1$ in \eqref{11} is given by \begin{equation}\label{G1LV}
G^1(t,x;T,y) = J^1(t,T,x) G^0(t,x;T,y)
\end{equation} where $J^1(t,T,x)$ is the differential operator \begin{equation}\label{J1LV}
J^1(t,T,x) = \int_t^T \a_1(s) V_{t,s,x} d s\, ({\partial}_{xx}-{\partial}_{x}), \end{equation} with $V_{t,s,x}\equiv V_{t,s,x,0}$ as in \eqref{38}, that is
$$V_{t,T,x}f(x) =\left(x-\bar{x}+(T-t){r}-\frac12 A(t,T)\right)f(x)+ A(t,T)\pa_x f(x).$$ \end{corollary}
\proof Although the result follows directly from Theorem \ref{t1}, here we propose an alternative proof of formula \eqref{J1LV}. The idea is to determine the solution of the Cauchy problem \eqref{2.2} in Fourier space, where all the computations can be carried out more easily; then, using the fact that the leading term $G^{0}$ of the expansion is a Gaussian kernel, we are able to compute explicitly the inverse Fourier transform to get back to the analytic approximation of the transition density.
Since we aim at showing the main ideas of an alternative approach, for simplicity we only consider the case of time-independent coefficients; precisely, we set $\a_{0}=2$ and $r=0$. In this case we have
$$L_{0}=\partial_{xx}-\partial_x + \pa_t$$ and the related Gaussian fundamental solution is equal to
$$G^{0}(t,x;T,y)=\frac{1}{\sqrt{4\pi (T-t)}}\,e^{-\frac{\lf x-y-(T-t)\rg^2}{4(T-t)}}.$$ Now we apply the Fourier transform (in the variable $x$) to the Cauchy problem \eqref{2.2} with $k=1$ and we get \begin{equation}\label{Cpb}
\begin{cases}
{\partial}_t\hat{G}^1(t,\x;T,y) &\hspace{-8pt}= \left(\x^2-i\x\right)\hat{G}^1(t,\x;T,y)\\
&+\a_1(i{\partial}_{\x}+\bar{x}) \lf-\x^2+i\x\rg \hat{G}^0(t,\x;T,y),\\
\hat{G}^1(T,\x;T,y) &\hspace{-13pt}= 0, \qquad \x\in\R.
\end{cases} \end{equation} Notice that \begin{equation}\label{41}
\hat{G}^0(t,\x;T,y)=e^{-\x^{2}(T-t)+i\x(y+(T-t))}. \end{equation} Therefore the solution to the ordinary differential equation \eqref{Cpb} is \begin{align*}
\hat{G}^1(t,\x;T,y)&=-\a_{1}\int_{t}^{T}e^{(s-t)(-\x^{2}+i\x)}(i{\partial}_{\x}+\bar{x})
\left((-\x^{2}+i\x) \hat{G}^0(s,\x;T,y)\right)ds= \intertext{(using the identity $f(\x)(i{\partial}_{\x}+\bar{x})(g(\x))=(i{\partial}_{\x}+\bar{x})(f(\x)g(\x))-ig(\x){\partial}_{\x}f(\x)$)}
&=-\a_{1}\int_{t}^{T}(i{\partial}_{\x}+\bar{x})\left((-\x^{2}+i\x)e^{(s-t)(-\x^{2}+i\x)}
\hat{G}^0(s,\x;T,y)\right)ds\\
&\quad+i\a_{1}\int_{t}^{T}(-\x^{2}+i\x)\hat{G}^0(s,\x;T,y){\partial}_{\x}e^{(s-t)(-\x^{2}+i\x)}ds= \intertext{(by \eqref{41})}
&=-\a_{1}\int_{t}^{T}(i{\partial}_{\x}+\bar{x})\left((-\x^{2}+i\x)e^{i\x(y+(T-t))-\x^{2}(T-t)}\right)ds\\
&\quad+i\a_{1}\int_{t}^{T}(-\x^{2}+i\x)(s-t)(-2\x+i)e^{i\x(y+(T-t))-\x^{2}(T-t)}ds= \intertext{(again by \eqref{41})}
&=-\a_{1}(T-t)(i{\partial}_{\x}+\bar{x})\left((-\x^{2}+i\x)\hat{G}^0(t,\x;T,y)\right)\\
&\quad+i\a_{1}\frac{(T-t)^2}{2}(-\x^{2}+i\x)(-2\x+i)\hat{G}^0(t,\x;T,y). \end{align*} Thus, inverting the Fourier transform, we get
\begin{align*}
G^1(t,x;T,y) &=\a_{1}(T-t)(x-\bar{x})({\partial}_x^{2}-{\partial}_x)G^0(t,x;T,y) + \\
&\quad -\a_{1}\frac{(T-t)^2}{2}(-2{\partial}_x^3+3{\partial}_x^{2}-{\partial}_x)G^0(t,x;T,y) \\
&=\a_{1}\left((T-t)^2{\partial}_x^3 + \lf(x-\bar{x})(T-t)-\frac32(T-t)^2\rg{\partial}_x^2 + \right.\\
&\quad\left. +\lf-(x-\bar{x})(T-t)+\frac{(T-t)^2}{2}\rg{\partial}_x\right)G^0(t,x;T,y), \end{align*} where the operator acting on $G^0(t,x;T,y)$ is exactly the same as in \eqref{J1LV}. \qquad\endproof
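As a sanity check (ours, not part of the original argument), the $k=1$ Cauchy problem \eqref{2.2} with $\a_0=2$ and $r=0$, namely $L_0G^1=-\a_1(x-\bar{x})({\partial}_{xx}-{\partial}_x)G^0$, can be verified symbolically for the expression of $G^1$ just obtained:
\begin{verbatim}
# Symbolic check that the operator computed above solves problem (2.2)
# with k = 1, alpha_0 = 2, r = 0.  Expected printed residual: 0.
import sympy as sp

t, T, x, y, xb, a1 = sp.symbols('t T x y xb a1', real=True)
tau = T - t
G0 = sp.exp(-(x - y - tau)**2 / (4*tau)) / sp.sqrt(4*sp.pi*tau)
D = lambda f, k=1: sp.diff(f, x, k)

G1 = a1*(tau**2 * D(G0, 3)
         + ((x - xb)*tau - sp.Rational(3, 2)*tau**2) * D(G0, 2)
         + (-(x - xb)*tau + tau**2/2) * D(G0))

residual = D(G1, 2) - D(G1) + sp.diff(G1, t) + a1*(x - xb)*(D(G0, 2) - D(G0))
print(sp.simplify(residual / G0))  # expected: 0
\end{verbatim}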
\begin{remark} As in Remark \ref{r4}, operator $J^1(t,T,x)$ can also be rewritten in the form \begin{equation}\label{37}
J^1(t,T,x) = \sum_{i=1}^{3}\sum_{j=0}^1 f^1_{i,j}(t,T)(x-\bar{x})^j \pa_x^i, \end{equation} where $f^1_{i,j}$ are deterministic functions whose explicit expression can be easily derived. \end{remark} The previous argument can be used to prove the following second order expansion. \begin{corollary}[2nd order expansion]\label{cor2} In case of $\l=0$, the solution $G^2$ in \eqref{12} is given by
$$ G^2(t,x;T,y) = J^2(t,T,x)G^0(t,x;T,y) $$ where \begin{equation}\label{J2LV} \begin{split}
J^2(t,T,x) &= \int_t^T \a_1(s) V_{t,s,x} ({\partial}_{xx}-{\partial}_{x}) \tilde{J}^1(t,s,T,x) d s \\
& + \int_t^T \a_2(s) V_{t,s,x}^2 d s\, ({\partial}_{xx}-{\partial}_{x}) \end{split} \end{equation} and $\tilde{J}^1$ is the ``adjoint'' operator of $J^1$, defined by
$$ \tilde{J}^1(t,s,T,x) = \sum_{i=1}^3\sum_{j=0}^1 f^1_{i,j}(s,T)V_{t,s,x}^j {\partial}_{x}^i $$ with $f^1_{i,j}$ as in \eqref{37}. \end{corollary}
\begin{remark} In a standard LV model, the leading operator of the approximation, i.e. $L_{0}$ in \eqref{40}, has a Gaussian density $G^{0}$ and this allowed us to use the inverse Fourier transform in order to get the approximated density. This approach does not work in the general case of models with jumps because typically the explicit expression of the fundamental solution of an integro-differential equation is not available. On the other hand, for several L\'evy processes used in finance, the characteristic function is known explicitly even if the density is not. This suggests that the argument used in this section may be adapted to obtain an approximation of the characteristic function of the process instead of its density. This is what we are going to investigate in \Sec{LV-J}. \end{remark}
\section{Local L\'evy models} \label{sec:LV-J}
In this section, we provide an expansion of the characteristic function for the local L\'evy model \eqref{X}. We denote by
$$\hat{\Gamma}(t,x;T,\x)=\F\left(\Gamma(t,x;T,\cdot)\right)(\x)$$ the Fourier transform, with respect to the second spatial variable, of the transition density $\Gamma(t,x;T,\cdot)$; clearly, $\hat{\Gamma}(t,x;T,\x)$ is the characteristic function of $X^{t,x}(T)$. Then, by applying the Fourier transform to the expansion \eqref{34}, we find \begin{equation}\label{34b}
\p_{X^{t,x}(T)}(\x)\, \approx \,
\sum_{k=0}^n \hat{G}^k(t,x;T,\x). \end{equation} Now we recall that $G^{k}(t,x;T,y)$ is defined, as a function of the variables $(t,x)$, in terms of the sequence of Cauchy problems \eqref{2.2}. Since the Fourier transform in \eqref{34b} is performed with respect to the variable $y$, in order to take advantage of such a transformation it seems natural to characterize $G^{k}(t,x;T,y)$ as a solution of the {\it adjoint operator} in the dual variables $(T,y)$.
To be more specific, we recall the definition of the adjoint operator. Let $L$ be the operator in \eqref{L}; then its adjoint operator $\tilde{L}$ satisfies (actually, it is defined by) the identity
$$\int_{\R^{2}}u(t,x)Lv(t,x)dxdt=\int_{\R^{2}}v(t,x)\tilde{L}u(t,x)dxdt$$ for all $u,v\in C_{0}^{\infty}$. More explicitly, by recalling notation \eqref{a}, we have \begin{align*}
\tilde{L}^{(T,y)}u(T,y)&=\frac{a(T,y)}{2}{\partial}_{yy}u(T,y)+b(T,y){\partial}_y u(T,y)\\
&\quad-{\partial}_Tu(T,y)+c(T,y)u(T,y)\\
&\quad +\int_{\R}\left(u(T,y+z)-u(T,y)-z{\partial}_y u(T,y)\caratt_{\{|z|<1\}}\right)\bar{\n}(dz), \end{align*} where
$$b(T,y)={\partial}_ya(T,y)-\left(\bar{r}-\frac{a(T,y)}{2}\right),\qquad c(T,y)=\frac12({\partial}_{yy}+{\partial}_y) a(T,y),$$ and $\bar{\n}$ is the L\'evy measure with reverted jumps, i.e. $\bar{\n}(dx)=\n(-dx)$. Here the superscript in $\tilde{L}^{(T,y)}$ is indicative of the fact that the operator $\tilde{L}$ is acting in the variables $(T,y)$.
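For the reader's convenience we indicate where $b$ and $c$ come from, under our reading that the local part of $L$ in \eqref{L} is $\frac{a}{2}\left({\partial}_{xx}-{\partial}_{x}\right)+\bar{r}{\partial}_{x}+{\partial}_{t}$ (cf. \eqref{L0F} below). Integrating by parts, the second order term contributes ${\partial}_{yy}\big(\frac{a}{2}u\big)$, the first order term contributes $-{\partial}_{y}\big(\big(\bar{r}-\frac{a}{2}\big)u\big)$, and ${\partial}_{t}$ gives $-{\partial}_{T}$; expanding,
\begin{equation*}
{\partial}_{yy}\Big(\frac{a}{2}\,u\Big)-{\partial}_{y}\Big(\Big(\bar{r}-\frac{a}{2}\Big)u\Big)
=\frac{a}{2}\,{\partial}_{yy}u+\Big({\partial}_{y}a-\Big(\bar{r}-\frac{a}{2}\Big)\Big){\partial}_{y}u
+\frac{1}{2}\big({\partial}_{yy}+{\partial}_{y}\big)a\,u,
\end{equation*}
which is exactly the pair $(b,c)$ above; finally, the change of variable $z\mapsto-z$ in the jump integral produces the reverted measure $\bar{\n}$.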
By a classical result (cf., for instance, \cite{GarroniMenaldi1992}) the fundamental solution $\G(t,x;T,y)$ of $L$ is also a solution of $\tilde{L}$ in the dual variables, that is \begin{equation}\label{Lad}
\tilde{L}^{(T,y)}\G(t,x;T,y)=0,\qquad t<T,\ x,y\in\R. \end{equation} Going back to approximation \eqref{34b}, the idea is to consider the series of the dual Cauchy problems of \eqref{2.2} in order to solve them by Fourier-transforming in the variable $y$ and finally get an approximation of $\p_{X^{t,x}(T)}$.
For the sake of simplicity, from now on we only consider the case of time-independent coefficients: the general case can be treated in a completely analogous way. First of all, we consider the integro-differential operator $L_0$ in \eqref{42}, which in this case becomes \begin{equation}\label{L0F} \begin{split}
L_0^{(t,x)}u(t,x)&=\frac{\a_0}{2}({\partial}_{xx}-{\partial}_x)u(t,x)+\bar{r}{\partial}_xu(t,x)+{\partial}_tu(t,x)\\
&\quad+\int_{\R}\left(u(t,x+y)-u(t,x)-y{\partial}_x u(t,x)\caratt_{\{|y|<1\}}\right)\n(dy), \end{split} \end{equation} and its adjoint operator \begin{equation}\label{Ltilde0} \begin{split}
\tilde{L}_0^{(T,y)}u(T,y)&=\frac{\a_0}{2}({\partial}_{yy}+{\partial}_y)u(T,y)-\bar{r}{\partial}_y u(T,y)-{\partial}_T u(T,y)\\
&\quad +\int_{\R}\left(u(T,y+z)-u(T,y)-z{\partial}_y u(T,y)\caratt_{\{|z|<1\}}\right)\bar{\n}(dz). \end{split} \end{equation} By \eqref{Lad}, for any $(t,x)\in\R^{2}$, the fundamental solution $G^{0}(t,x;T,y)$ of $L_0$ solves the dual Cauchy problem \begin{equation}\label{50}
\begin{cases}
\tilde{L}_0^{(T,y)}G^{0}(t,x;T,y) = 0,\qquad &T>t,\ y\in\R,\\
G^{0}(t,x;t,\cdot) = \d_x.
\end{cases} \end{equation} It is remarkable that a similar result holds for the higher order terms of the approximation \eqref{34b}. Indeed, let us denote by $L_{n}$ the $n^{\text{th}}$ order approximation of $L$ in \eqref{43}: \begin{equation}\label{43b}
L_{n}=L_{0}+\sum_{k=1}^{n}\a_k(x-\bar{x})^k\lf\partial_{xx}-\partial_x\rg \end{equation} Then we have the following result. \begin{theorem} For any $k\ge 1$ and $(t,x)\in\R^{2}$, the function $G^k(t,x;\cdot,\cdot)$ in \eqref{2.2} is the solution of the following dual Cauchy problem on $]t,+\infty[\times \R$ \begin{equation}\label{51}
\begin{cases}
\tilde{L}^{(T,y)}_{0} G^k(t,x;T,y)=- \sum\limits_{h=1}^k\left(\tilde{L}^{(T,y)}_{h}-\tilde{L}^{(T,y)}_{h-1}\right)
G^{k-h}(t,x;T,y),\\
G^k(t,x;t,y)= 0, \qquad y\in\R,
\end{cases} \end{equation} where \begin{align*}
\tilde{L}^{(T,y)}_{h}-\tilde{L}^{(T,y)}_{h-1}&=\a_h(y-\bar{x})^{h-2}
\Big((y-\bar{x})^{2}\partial_{yy}+(y-\bar{x})\left(2h+(y-\bar{x})\right)\partial_y\\
&\quad +h\left(h-1+y-\bar{x}\right)\Big). \end{align*} \end{theorem} \proof By the standard representation formula for the solutions of the {\it backward} parabolic Cauchy problem \eqref{2.2}, for $k\geq 1$ we have \begin{equation}\label{999}
G^k(t,x;T,y)=\sum_{h=1}^{k}\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{h}G^{k-h}(s,\y;T,y)d\y ds, \end{equation} where to shorten notation we have set \begin{equation*} M^{(t,x)}_{h}=L^{(t,x)}_{h}-L^{(t,x)}_{h-1}. \end{equation*} By \eqref{50} and since \begin{equation*}
\tilde{M}^{(T,y)}_{h}=\tilde{L}^{(T,y)}_{h}-\tilde{L}^{(T,y)}_{h-1}, \end{equation*} the assertion is equivalent to \begin{equation}\label{998}
G^k(t,x;T,y)=\sum_{h=1}^{k}\int_{t}^{T}\int_{\R}G^{0}(s,\y;T,y)\tilde{M}^{(s,\y)}_{h}G^{k-h}(t,x;s,\y)d\y ds, \end{equation} where here we have used the representation formula for the solutions of the {\it forward} Cauchy problem \eqref{51} with $k\ge 1$.
We proceed by induction and first prove \eqref{998} for $k=1$. By \eqref{999} we have \begin{align*}
G^1(t,x;T,y)&=\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{1}G^{0}(s,\y;T,y)d\y ds\\
&=\int_{t}^{T}\int_{\R}G^{0}(s,\y;T,y)\tilde{M}^{(s,\y)}_{1}G^{0}(t,x;s,\y)d\y ds, \end{align*} and this proves \eqref{998} for $k=1$.
Next we assume that \eqref{998} holds for a generic $k\ge 1$ and we prove the thesis for $k+1$. Again, by \eqref{999} we have {\allowdisplaybreaks \begin{align*}
G^{k+1}(t,x;T,y)&=\sum_{j=1}^{k+1}\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{j}G^{k+1-j}(s,\y;T,y)d\y ds\\
&=\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{k+1}G^{0}(s,\y;T,y)d\y ds\\
&\quad+\sum_{j=1}^{k} \int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{j}G^{k+1-j}(s,\y;T,y)d\y ds= \end{align*}} (by the inductive hypothesis) {\allowdisplaybreaks \begin{align*}
&=\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{k+1}G^{0}(s,\y;T,y)d\y ds\\
&\quad+\sum_{j=1}^{k}\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{j}\cdot\\
&\quad\cdot\sum_{h=1}^{k+1-j}\int_{s}^{T}\int_{\R}G^{0}(\t,\z;T,y)\tilde{M}^{(\t ,\z)}_{h} G^{k+1-j-h}(s,\y;\t,\z)d\z d\t d\y ds\\
&=\int_{t}^{T}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{k+1}G^{0}(s,\y;T,y)ds d\y\\
&\quad+\sum_{h=1}^{k}\sum_{j=1}^{k+1-h}\int_{t}^{T}\int_{t}^{\t}\int_{\R^2}G^{0}(t,x;s,\y)G^{0}(\t,\z;T,y)\cdot\\
&\quad\cdot M^{(s,\y)}_{j}\tilde{M}^{(\t ,\z)}_{h} G^{k+1-j-h}(s,\y;\t,\z)d\y d\z ds d\t \\
&=\int_{t}^{T}\int_{\R}G^{0}(s,\y;T,y)\tilde{M}^{(s,\y)}_{k+1}G^{0}(t,x;s,\y)ds d\y\\
&\quad+\sum_{h=1}^{k}\int_{t}^{T}\int_{\R}G^{0}(\t,\z;T,y)\tilde{M}^{(\t ,\z)}_{h} \cdot\\
&\quad\cdot\left(\sum_{j=1}^{k+1-h}\int_{t}^{\t}\int_{\R}G^{0}(t,x;s,\y)M^{(s,\y)}_{j} G^{k+1-h-j}(s,\y;\t,\z)d\y ds\right) d\z d\t=
\intertext{(again by \eqref{999})}
&=\int_{t}^{T}\int_{\R}G^{0}(s,\y;T,y)\tilde{M}^{(s,\y)}_{k+1}G^{0}(t,x;s,\y)ds d\y\\
&\quad+\sum_{h=1}^{k}\int_{t}^{T}\int_{\R}G^{0}(\t,\z;T,y)\tilde{M}^{(\t ,\z)}_{h} G^{k+1-h}(t,x;\t,\z) d\z d\t \\
&=\sum_{h=1}^{k+1}\int_{t}^{T}\int_{\R}G^{0}(\t,\z;T,y)\tilde{M}^{(\t ,\z)}_{h} G^{k+1-h}(t,x;\t,\z) d\z d\t. \qquad \end{align*}} \endproof
Next we solve problems \eqref{50}-\eqref{51} by applying the Fourier transform in the variable $y$ and using the identity \begin{equation}\label{52}
\F_{y}\left(\tilde{L}_0^{(T,y)}u(T,y)\right)(\x)=\psi(\x)\hat{u}(T,\x)-\partial_{T}\hat{u}(T,\x), \end{equation} where \begin{equation}\label{53}
\psi(\x)=-\frac{\a_0}{2}(\x^2+i\x)+i\bar{r}\x+\int_{\R}\left(e^{iz\x}-1-iz\x\caratt_{\{|z|<1\}}\right)\n(dz). \end{equation} We remark explicitly that $\psi$ is the characteristic exponent of the L\'evy process \begin{equation}\label{60}
d X^{0}(t)=\left(\bar{r}-\frac{\a_0}{2}\right)d t+ \sqrt{\a_0} d W(t) +d J(t), \end{equation} whose Kolmogorov operator is $L_{0}$ in \eqref{L0F}. Then: \begin{enumerate}[(i)]
\item from \eqref{50} we obtain the ordinary differential equation \begin{equation}\label{CpbF0}
\begin{cases}
\partial_T\hat{G}^{0}(t,x;T,\xi)=\psi(\x)\hat{G}^{0}(t,x;T,\xi),\qquad T>t,\\
\hat{G}^{0}(t,x;t,\xi) = e^{i\x x}.
\end{cases} \end{equation} with solution \begin{equation}\label{53b}
\hat{G}^{0}(t,x;T,\xi)=e^{i\x x+(T-t)\psi(\x)} \end{equation} which is the $0^{\text{th}}$ order approximation of the characteristic function $\p_{X^{t,x}(T)}$.
\item from \eqref{51} with $k=1$, we have \begin{equation*}
\begin{cases}
{\partial}_T\hat{G}^1(t,x;T,\x) \hspace{-9pt} &= \psi(\x)\hat{G}^1(t,x;T,\x)\\
\hspace{-9pt} &\quad+ \a_1\lf (i{\partial}_{\x}+\bar x)(\x^2+i\x)-2i\x+1 \rg \hat{G}^0(t,x;T,\xi) \\
\hspace{15pt}\hat{G}^1(t,x;t,\x)\hspace{-9pt} & = 0,
\end{cases} \end{equation*} with solution
$$\hat{G}^1(t,x;T,\x) = \int_t^Te^{\psi(\x)(T-s)}\a_1 \lf (i{\partial}_{\x}+\bar x)(\x^2+i\x)-2i\x+1 \rg \hat{G}^0(t,x;s,\xi)
ds=$$ (by \eqref{53b}) \begin{align}\nonumber
&= -e^{ix\x+\psi(\x)(T-t)}\a_{1} \int_t^T (\xi^{2}+i\x ) \left(x-\bar{x}-i (s-t)
\psi'(\xi)\right)ds\\ \label{54}
&=-\hat{G}^0(t,x;T,\xi)\a_1 (T-t)(\xi^{2}+i\x) \left(x- \bar{x}-\frac{i}{2}(T-t)
\psi'(\xi)\right), \end{align}
which is the first order term in the expansion \eqref{34b}.
\item regarding \eqref{51} with $k=2$, a straightforward computation based on analogous arguments
shows that the second order term in the expansion \eqref{34b} is given by \begin{equation}\label{55}
\hat{G}^2(t,x;T,\xi)=\hat{G}^0(t,x;T,\xi)\sum_{j=0}^{2}g_{j}(T-t,\x)(x-\bar{x})^{j} \end{equation} where {\allowdisplaybreaks \begin{align*}
g_{0}(s,\x)&=\frac{1}{2}s^{2} \a_2 \xi (i+\xi ) \psi''(\xi)\\
&\quad -\frac{1}{6}s^{3} \xi (i+\xi ) \psi''(\xi)\left(\a_1^2 (i+2 \xi )-2 \a_2 \psi''(\xi)+\a_1^2 \xi (i+\xi )\right)\\
&\quad -\frac{1}{8}s^{4} \a_1^2 \xi^2 (i+\xi )^2 \psi''(\xi)^2,\\
g_{1}(s,\x)&= \frac{1}{2}s^{2} \xi (i+\xi ) \left(\a_1^2 (1-2 i \xi )+2 i \a_2 \psi''(\xi)\right)\\
&\quad -\frac{1}{2}s^{3} i \a_1^2 \xi ^2 (i+\xi )^2 \psi''(\xi),\\
g_{2}(s,\x)&=-\a_2 s\xi (i+\xi )+ \frac{1}{2}s^{2} \a_1^2 \xi ^2 (i+\xi )^2. \end{align*}} \end{enumerate} Plugging \eqref{53b}-\eqref{54}-\eqref{55} into \eqref{34b}, we finally get the second order approximation of the characteristic function of $X$. In Subsection \ref{HOA}, we also provide the expression of $\hat{G}^k(t,x;T,\xi)$ for $k=3,4$, appearing in the $4^{\text{th}}$ order approximation. \begin{remark} The basepoint $\bar{x}$ is a parameter which can be freely chosen in order to sharpen the accuracy of the approximation. In general, the simplest choice $\bar{x}=x$ seems to be sufficient to get very accurate results. \end{remark} \begin{remark} To overcome the use of the adjoint operators, it would be interesting to investigate an alternative approach to the approximation of the characteristic function based of the following remarkable symmetry relation valid for time-homogeneous diffusions \begin{equation}\label{56}
m(x)\G(0,x;t,y)=m(y)\G(0,y;t,x) \end{equation} where $m$ is the so-called density of the speed measure
$$m(x)=\frac{2}{\s^{2}(x)}\exp\left(\int_{1}^{x}\left(\frac{2r}{\s^{2}(z)}-1\right)dz\right).$$ Relation \eqref{56} is stated in \cite{ItoMcKean1974} and a complete proof can be found in \cite{EkstromTysk2011}. \end{remark}
For completeness, we close this section by stating an integral pricing formula for European options proved by Lewis \cite{Lewis2001}; the formula is given in terms of the characteristic function of the underlying log-price process. The formula below (and other Fourier-inversion methods, such as the standard and fractional FFT algorithms or the recent COS method \cite{Oosterlee2008}) can be combined with the expansion \eqref{34b} to price and hedge efficiently hybrid LV models with L\'evy jumps.
We consider a risky asset $S(t)=e^{X(t)}$ where $X$ is the process whose risk-neutral dynamics under a martingale measure $Q$ is given by \eqref{X}. We denote by
$H(t,S(t))$ the price at time $t<T$, of a European option with underlying asset $S$, maturity $T$ and payoff $f=f(x)$ (given as a function of the log-price): to fix ideas, for a Call option with strike $K$ we have
$$f^{\text{Call}}(x)=\left(e^{x}-K\right)^{+}.$$ The following theorem is a classical result which can be found in several textbooks (see, for instance, \cite{Pascucci2011book}). \begin{theorem}\label{t10}
Let
$$f_{\g}(x)=e^{-\g x}f(x)$$
and assume that there exists $\g\in\R$ such that \begin{enumerate}
\item[{\it i)}] $f_{\g},\hat{f}_{\g}\in L^{1}(\R)$;
\item[{\it ii)}] $E^{Q}\left[S(T)^{\g}\right]$ is finite. \end{enumerate} Then, the following pricing formula holds:
$$H(t,S(t))=\frac{e^{-r(T-t)}}{\pi}\int_{0}^{\infty}\hat{f}(\x+i\g)\p_{X^{t,\log S(t)}(T)}(-(\x+i\g))d\x.$$ \end{theorem} For example, $f^{\text{Call}}$ verifies the assumptions of Theorem \ref{t10} for any $\g>1$ and we have
$$\hat{f}^{\text{Call}}(\x+i\g)=\frac{K^{1-\g}e^{i\x \log K}}{\left(i\x-\g\right)\left(i\x-\g+1\right)}.$$ Other examples of typical payoff functions and the related Greeks can be found in \cite{Pascucci2011book}.
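As an illustration (our assembly, not from the original), the sketch below prices a Call in the CEV-Merton setting of Section \ref{sec:numeric} by plugging the zero-th order characteristic function \eqref{53b} into the formula of Theorem \ref{t10}; formula \eqref{54} would supply the first order correction analogously. Two assumptions are made explicit in the comments: the frozen coefficient $\a_0=\s_0^2e^{2(\b-1)\bar{x}}$ with $\bar{x}=x$, and jumps compensated by their full mean (admissible for Gaussian jumps), so that $\bar{r}$ is fixed by the martingale condition $\psi(-i)=r$.
\begin{verbatim}
# Sketch (ours): 0-th order Lewis pricing in the CEV-Merton setting.
# ASSUMPTIONS: a0 = sigma0^2*exp(2*(beta-1)*x) (coefficient frozen at xbar=x);
# jumps compensated by their full mean, so rbar solves psi(-i) = r.
import numpy as np
from scipy.integrate import quad

r, T, S0, K, gam = 0.05, 1.0, 1.0, 1.0, 1.5      # gam > 1 for a Call
lam, m, delta = 0.3, -0.1, 0.4
sigma0, beta = 0.2, 0.5
x = np.log(S0)
a0 = sigma0**2 * np.exp(2*(beta - 1)*x)
rbar = r - lam*(np.exp(m + 0.5*delta**2) - 1.0 - m)  # martingale drift (assumed)

def psi(xi):   # characteristic exponent, eq. (53), full-mean compensation
    return (-0.5*a0*(xi**2 + 1j*xi) + 1j*rbar*xi
            + lam*(np.exp(1j*m*xi - 0.5*delta**2*xi**2) - 1.0 - 1j*m*xi))

def phi0(xi):  # \hat G^0(t,x;T,xi), eq. (53b), with t = 0
    return np.exp(1j*xi*x + T*psi(xi))

def fhat_call(z):  # Fourier transform of the Call payoff (see above)
    return K**(1j*z + 1.0) / ((1j*z)*(1j*z + 1.0))

# the imaginary part integrates to zero; we keep the real part
integrand = lambda u: (fhat_call(u + 1j*gam) * phi0(-(u + 1j*gam))).real
I, _ = quad(integrand, 0.0, 200.0, limit=200)
print("0th order Call price ~", np.exp(-r*T)/np.pi * I)
\end{verbatim}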
\subsection{High order approximations}\label{HOA} The analysis of \Sec{LV-J} can be carried out to get approximations of arbitrarily high order. Below we give the more accurate (but more complicated) formulae up to the $4^{\text{th}}$ order that we used in the numerical section. In particular we give the expression of $\hat{G}^k(t,x;T,\xi)$ in \eqref{34b} for $k=3,4$. For simplicity, we only consider the case of time-homogeneous coefficients and $\xbar=x$.
We have
$$\hat{G}^3(t,x;T,\xi)=\hat{G}^0(t,x;T,\xi)\sum_{j=3}^{7}g_{j}(\x)(T-t)^{j}$$
where {\allowdisplaybreaks \begin{align*}
g_3(\xi)&= \frac{1}{2} \a_3 (1-i \xi ) \xi \psi^{(3)}(\x),\\
g_4(\xi)&= \frac{1}{6} i \xi (i+\xi ) \bigg(2 \psi'(\x) \left(\a_1 \a_2-3 \a_3
\psi''(\x)\right)\\
&\quad+\a_1 \a_2 \left(3(i+2 \xi ) \psi''(\x)+2 \xi (i+\xi ) \psi^{(3)}(\x)\right)\bigg),\\
g_5(\xi)&= \frac{1}{24} (1-i \xi ) \xi \Big(-8 \a_1 \a_2 (i+2 \xi ) \psi'(\x)^2+6 \a_3
\psi'(\x)^3\\
&\quad+\a_1 \psi'(\x) \left(\a_1^2 (-1+6 \xi (i+\xi ))-16 \a_2 \xi (i+\xi )
\psi''(\x)\right)\\
&\quad+\a_1^3 \xi (i+\xi ) \left(3( i+2 \xi ) \psi''(\x)+\xi (i+\xi )
\psi^{(3)}(\x)\right)\Big),\\
g_6(\xi)&= -\frac{1}{12} i \a_1 \xi ^2 (i+\xi )^2 \psi'(\x) \Big(\a_1^2 (i+2 \xi ) \psi'(\x)\\
&\quad-2 \a_2 \psi'(\x)^2+\a_1^2 \xi (i+\xi ) \psi''(\x)\Big),\\
g_7(\xi)&= -\frac{1}{48} i \left(\a_1 \xi (i+\xi ) \psi'(\x)\right)^3. \end{align*}} Moreover, we have
$$\hat{G}^4(t,x;T,\xi)=\hat{G}^0(t,x;T,\xi)\sum_{j=3}^{9}g_{j}(\x)(T-t)^{j}$$ where {\allowdisplaybreaks \begin{align*}
g_3(\xi)&= -\frac{1}{2} \a_{4} \xi (i+\xi ) \psi^{(4)}(\x),\\
g_4(\xi)&= \frac{1}{6} \xi (i+\xi ) \Big(2 \psi''(\x) \left(\a_2^2+3 \a_1 \a_3-3 \a_4
\psi''(\x)\right)\\
&\quad +2 \left(\left(\a_2^2+2 \a_1 \a_3\right) (i+2 \xi )-4 \a_4
\psi'(\x)\right) \psi^{(3)}(\x)\\
&\quad +\left(\a_2^2+2 \a_1 \a_3\right) \xi (i+\xi )
\psi^{(4)}(\x)\Big),\\
g_5(\xi)&= -\frac{1}{24} \xi (i+\xi ) \Big(\a_1^2 \a_2 (-7+44 \xi (i+\xi
)) \psi''(\x)\\
&\quad -\left(7 \a_2^2+15 \a_1 \a_3\right) \xi (i+\xi ) \psi''(\x)^2\\
&\quad -2
\psi'(\x)^2 \left(2 \a_2^2+9 \a_1 \a_3-18 \a_4 \psi''(\x)\right)\\
&\quad +\psi'(\x) \Big((i+2
\xi ) \left(8 \a_1^2 \a_2-\left(14 \a_2^2+33 \a_1 \a_3\right) \psi''(\x)\right)\\
&\quad-\left(10
\a_2^2+21 \a_1 \a_3\right) \xi (i+\xi ) \psi^{(3)}(\x)\Big)\\
&\quad +3 \a_1^2 \a_2 \xi (i+\xi )
\left(4(i+2 \xi ) \psi^{(3)}(\x)+\xi (i+\xi ) \psi^{(4)}(\x)\right)\Big),\\
g_6(\xi)&= \frac{1}{120} \xi (i+\xi ) \Big(2 \left(8 \a_2^2+21 \a_1 \a_3\right) (i+2 \xi )
\psi'(\x)^3-24 \a_4 \psi'(\x)^4\\
&\quad +2 \psi'(\x)^2 \left(\a_1^2 \a_2 (11-70 \xi (i+\xi
))+\left(26 \a_2^2+57 \a_1 \a_3\right) \xi (i+\xi ) \psi''(\x)\right)\\
&\quad +\a_1^2 \psi'(\x)
\Big((i+2 \xi ) \left(\a_1^2 (-1+12 \xi (i+\xi ))-112 \a_2 \xi (i+\xi ) \psi''(\x)\right)\\
&\quad -38 \a_2 \xi ^2 (i+\xi )^2 \psi^{(3)}(\x)\Big)+\a_1^2 \xi (i+\xi ) \Big(\a_1^2 (-7+36 \xi (i+\xi
)) \psi''(\x)\\
&\quad -26 \a_2 \xi (i+\xi ) \psi''(\x)^2+\a_1^2 \xi (i+\xi ) \left(6 (i+2 \xi )
\psi^{(3)}(\x)+\xi (i+\xi ) \psi^{(4)}(\x)\right)\Big)\Big),\\
g_7(\xi)&= \frac{1}{144} \xi ^2 (i+\xi )^2 \Big(-32 \a_1^2 \a_2 (i+2 \xi ) \psi'(\x)^3+2
\left(4 \a_2^2+9 \a_1 \a_3\right) \psi'(\x)^4\\
&\quad +2 \a_1^4 \xi ^2 (i+\xi )^2 \psi''(\x)^2\\
&\quad+\a_1^2 \psi'(\x)^2 \left(\a_1^2 (-5+26 \xi (i+\xi ))-47 \a_2 \xi (i+\xi )
\psi''(\x)\right)\\
&\quad +\a_1^4 \xi (i+\xi ) \psi'(\x) \left(13 (i+2 \xi ) \psi''(\x)+3 \xi
(i+\xi ) \psi^{(3)}(\x)\right)\Big),\\
g_8(\xi)&= \frac{1}{48} \a_1^2 \xi ^3 (i+\xi )^3 \psi'(\x)^2 \Big(\a_1^2 (i+2 \xi )
\psi'(\x)\\
&\quad -2 \a_2 \psi'(\x)^2+\a_1^2 \xi (i+\xi ) \psi''(\x)\Big),\\
g_9(\xi)&= \frac{1}{384} \a_1^4 \xi ^4 (i+\xi )^4 \psi'(\x)^4.
\end{align*}}
\section{Numerical tests} \label{sec:numeric}
In this section our approximation formulae \eqref{34b} are tested and compared with a standard Monte Carlo method. We consider up to the $4^{\text{th}}$ order expansion (i.e. $n=4$ in \eqref{34b}), although in most cases the $2^{\text{nd}}$ order already seems sufficient to obtain very accurate results. We analyze the case of a constant elasticity of variance (CEV) volatility function with L\'evy jumps of Gaussian or Variance-Gamma type. Thus, we consider the log-price dynamics \eqref{X} with
$$\sigma(t,x)=\sigma_{0} e^{(\b-1)x},\quad\b\in[0,1],\ \s_{0}>0,$$ and $J$ as in Examples \ref{ex3} and \ref{ex4} respectively. In our experiments we assume the following values for the parameters: \begin{enumerate}[(i)]
\item $S_0=1$ (initial stock price);
\item $r=5\%$ (risk-free rate);
\item $\s_{0}=20\%$ (CEV volatility parameter);
\item $\b=\frac{1}{2}$ (CEV exponent). \end{enumerate} In order to present realistic tests, we allow the range of strikes to vary over the maturities; specifically, we consider extreme values of the strikes where Call prices are of the order of $10^{-3}S_{0}$, that is, we consider deep out-of-the-money options which are very close to being worthless. To compute the reference values, we use an Euler-\MC method with $10$ million simulations and $250$ time-steps per year.
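The benchmark scheme is standard; a compact sketch (ours) for the CEV-Merton case, with far fewer paths than the $10^7$ used for the tables, reads as follows. The risk-neutral drift $r-\frac{\s^2}{2}-\l\big(e^{m+\d^2/2}-1\big)$ is our assumption on how the martingale condition specializes here.
\begin{verbatim}
# Sketch (ours) of the Euler-Monte Carlo benchmark for the CEV model with
# Merton jumps: log-price dynamics (X) with sigma(t,x) = sigma0*exp((beta-1)x).
# ASSUMPTION: drift r - sigma^2/2 - lam*(E[e^Z]-1) (full-mean compensation).
import numpy as np

rng = np.random.default_rng(0)
S0, r, sigma0, beta = 1.0, 0.05, 0.2, 0.5
lam, m, delta = 0.3, -0.1, 0.4          # Merton jump parameters (see below)
T, K, n_paths = 1.0, 1.0, 100_000
n_steps = int(round(T * 250))           # 250 time-steps per year
dt = T / n_steps
kappa = np.exp(m + 0.5 * delta**2) - 1.0   # E[e^Z] - 1

X = np.full(n_paths, np.log(S0))
for _ in range(n_steps):
    sig = sigma0 * np.exp((beta - 1.0) * X)
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dN = rng.poisson(lam * dt, n_paths)
    dJ = rng.normal(m * dN, delta * np.sqrt(dN))   # sum of dN Gaussian jumps
    X += (r - 0.5 * sig**2 - lam * kappa) * dt + sig * dW + dJ

payoff = np.exp(-r * T) * np.maximum(np.exp(X) - K, 0.0)
print(f"MC Call price ~ {payoff.mean():.5f} "
      f"+/- {1.96 * payoff.std(ddof=1) / np.sqrt(n_paths):.5f} (95% c.i.)")
\end{verbatim}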
\subsection{Tests under CEV-Merton dynamics}
In the CEV-Merton model of Example \ref{ex3}, we consider the following set of parameters: \begin{enumerate}[(i)]
\item $\l=30\%$ (jump intensity);
\item $m=-10\%$ (average jump size);
\item $\d=40\%$ (jump volatility). \end{enumerate} In Table \ref{tab:MertonCEV}, we give detailed numerical results, in terms of prices and implied volatilities, about the accuracy of our fourth order formula (PPR-$4^{\text{th}}$) compared with the bounds of the Monte Carlo $95\%$-confidence interval.
\begin{table}[htb]
\centering
{\footnotesize
\begin{tabular}{c|c|c l@{ -- }l |c l@{ -- }l}
\hline\hline
& & \multicolumn{3}{c}{Call prices} & \multicolumn{3}{c}{Implied volatility (\%)}
\\[1ex]
$T$&$K$ & PPR-$4^{\text{th}}$ & \multicolumn{2}{c}{MC-$95\%$ c.i.}
& PPR-$4^{\text{th}}$ & \multicolumn{2}{c}{MC-$95\%$ c.i.}
\\[1ex]
\hline \hline
& 0.5 & 0.50669 & 0.50648 & 0.50666 & 57.81 & 54.03 & 57.31
\\ &0.75 & 0.26324 & 0.26304 & 0.26321 & 37.91 & 37.48 & 37.84
\\ 0.25 & 1 & 0.05515 & 0.05501 & 0.05514 & 24.58 & 24.50 & 24.57
\\ & 1.25 & 0.00645 & 0.00637 & 0.00645 & 30.48 & 30.39 & 30.49
\\ & 1.5 & 0.00305 & 0.00300 & 0.00306 & 42.05 & 41.93 & 42.07
\\ \hline & 0.5 & 0.52720 & 0.52700 & 0.52736 & 38.82 & 38.35 & 39.20
\\ & 1 & 0.13114 & 0.13097 & 0.13125 & 27.06 & 27.01 & 27.08
\\ 1 & 1.5 & 0.01840 & 0.01836 & 0.01852 & 29.04 & 29.03 & 29.10
\\ & 2 & 0.00566 & 0.00566 & 0.00575 & 34.45 & 34.45 & 34.55
\\ & 2.5 & 0.00209 & 0.00208 & 0.00214 & 37.65 & 37.62 & 37.77
\\ \hline & 0.5 & 0.72942 & 0.72920 & 0.73045 & 32.88 & 32.81 & 33.21
\\ & 1 & 0.52316 & 0.52293 & 0.52411 & 29.67 & 29.64 & 29.80
\\ 10 & 5 & 0.05625 & 0.05604 & 0.05664 & 26.12 & 26.09 & 26.17
\\ & 7.5 & 0.02267 & 0.02246 & 0.02290 & 26.34 & 26.30 & 26.39
\\ & 10 & 0.01241 & 0.01091 & 0.01126 & 27.05 & 26.54 & 26.66
\\ \hline \hline \end{tabular}
\caption{Call prices and implied volatilities in the CEV-Merton model for the fourth order formula (PPR-$4^{\text{th}}$) and
the Monte Carlo (MC-$95\%$) with 10 million simulations using an Euler scheme with 250 time steps per year, expressed as a function of the strike at the expiries T = 3M, 1Y, 10Y. Parameters: $S_0=1$ (initial stock price), $r=5\%$ (risk-free rate), $\s_{0}=20\%$ (CEV volatility parameter), $\b=\frac{1}{2}$ (CEV exponent), $\l=30\%$ (jump intensity), $m=-10\%$ (average jump size), $\d=40\%$ (jump volatility). }
\label{tab:MertonCEV} } \end{table}
Figures \ref{fig1}, \ref{fig2} and \ref{fig3} show the performance of the $1^{\text{st}}$, $2^{\text{nd}}$ and $3^{\text{rd}}$ approximations against the Monte Carlo $95\%$ and $99\%$ confidence intervals, marked in dark and light gray respectively. In particular, Figure \ref{fig1} shows the cross-sections of absolute (left) and relative (right) errors
for the price of a Call with short-term maturity $T=0.25$ and strike $K$ ranging from $0.5$ to $1.5$. The relative error is defined as
$$\frac{\text{Call}^{\text{approx}}-\text{Call}^{\text{MC}}}{\text{Call}^{\text{MC}}}$$ where $\text{Call}^{\text{approx}}$ and $\text{Call}^{\text{MC}}$ are the approximated and Monte Carlo prices respectively. In Figure \ref{fig2} we repeat the test for the medium-term maturity $T=1$ and the strike $K$ ranging from $0.5$ to $2.5$. Finally in Figure \ref{fig3} we consider the long-term maturity $T=10$ and the strike $K$ ranging from $0.5$ to $4$.
Other experiments, not reported here, show that the $2^{\text{nd}}$ order expansion \eqref{33}, which is valid only in the case of Gaussian jumps, gives the same results as formula \eqref{34b} with $n=2$, at least if the truncation index $M$ is suitably large, namely $M\ge 8$ under standard parameter regimes. For this reason we have only used formula \eqref{34b} for our tests.
\subsection{Tests under CEV-Variance-Gamma dynamics} In this subsection we repeat the previous tests in the case of the CEV-Variance-Gamma model. Specifically, we consider the following set of parameters: \begin{enumerate}[(i)]
\item $\kappa=15\%$ (variance of the Gamma subordinator);
\item $\th=-10\%$ (drift of the Brownian motion);
\item $\s=20\%$ (volatility of the Brownian motion). \end{enumerate}
Analogously to Table \ref{tab:MertonCEV}, in Table \ref{tab:VGCEV} we compare our Call price formulas with a high-precision Monte Carlo approximation (with $10^{7}$ simulations and $250$ time-steps per year) for several strikes and maturities. For both the price and the implied volatility, we report our $4^{\text{th}}$ order approximation (PPR $4^{\text{th}}$) and the boundaries of the Monte Carlo $95\%$-confidence interval. \begin{table}[htb]
\centering
{\footnotesize
\begin{tabular}{c|c|c l@{ -- }l |c l@{ -- }l}
\hline\hline
& & \multicolumn{3}{c}{Call prices} & \multicolumn{3}{c}{Implied volatility (\%)}
\\[1ex]
$T$&$K$ & PPR $4^{\text{th}}$ & \multicolumn{2}{c}{MC 95\% c.i.}
& PPR $4^{\text{th}}$ & \multicolumn{2}{c}{MC 95\% c.i.}
\\[1ex]
\hline \hline
& 0.8 & 0.23708 & 0.23704 & 0.23722 & 55.61 & 55.57 & 55.72
\\ & 0.9 & 0.15489 & 0.15482 & 0.15497 & 47.09 & 47.05 & 47.14
\\ 0.25 & 1 & 0.08413 & 0.08403 & 0.08415 & 39.29 & 39.24 & 39.30
\\ & 1.1 & 0.03436 & 0.03426 & 0.03433 & 33.27 & 33.22 & 33.26
\\ & 1.2 & 0.00968 & 0.00961 & 0.00965 & 29.28 & 29.21 & 29.25
\\ \hline & 0.5 & 0.54643 & 0.54630 & 0.54679 & 61.02 & 60.91 & 61.30
\\ & 0.75 & 0.35456 & 0.35438 & 0.35479 & 52.35 & 52.28 & 52.44
\\ 1 & 1 & 0.20071 & 0.20049 & 0.20082 & 45.42 & 45.36 & 45.45
\\ & 1.5 & 0.03394 & 0.03374 & 0.03387 & 35.16 & 35.09 & 35.14
\\ & 2 & 0.00188 & 0.00185 & 0.00188 & 29.08 & 29.01 & 29.07
\\ \hline & 0.5 & 0.80150 & 0.80279 & 0.80502 & 52.60 & 52.95 & 53.53
\\ & 1 & 0.66691 & 0.66775 & 0.66990 & 49.09 & 49.21 & 49.52
\\ 10 & 5 & 0.22948 & 0.22836 & 0.22986 & 42.02 & 41.93 & 42.05
\\ & 7.5 & 0.13680 & 0.13497 & 0.13618 & 40.34 & 40.17 & 40.29
\\ & 10 & 0.08664 & 0.08418 & 0.08518 & 39.21 & 38.93 & 39.05
\\ \hline \hline \end{tabular}
\caption{Call prices and implied volatilities in the CEV-Variance-Gamma model for the fourth order formula (PPR-$4^{\text{th}}$) and
the Monte Carlo (MC-$95\%$) with 10 million simulations using an Euler scheme with 250 time steps per year, expressed as a function of the strike at the expiries T = 3M, 1Y, 10Y. Parameters: $S_0=1$ (initial stock price), $r=5\%$ (risk-free rate), $\s_{0}=20\%$ (CEV volatility parameter), $\b=\frac{1}{2}$ (CEV exponent), $\kappa=15\%$ (variance of the Gamma subordinator), $\th=-10\%$ (drift of the Brownian motion), $\s=20\%$ (volatility of the Brownian motion).}
\label{tab:VGCEV} } \end{table}
Figures \ref{fig4}, \ref{fig5} and \ref{fig6} show the cross-sections of absolute (left) and relative (right) errors of the $2^{\text{nd}}$, $3^{\text{rd}}$ and $4^{\text{th}}$ approximations against the Monte Carlo $95\%$ and $99\%$ confidence intervals, marked in dark and light gray respectively. Notice that, for longer maturities and deep out-of-the-money options, the lower order approximations give good results in terms of absolute errors but only the $4^{\text{th}}$ order approximation lies inside the confidence regions. For a more detailed comparison, in Figures \ref{fig5} and \ref{fig6} we plot the $2^{\text{nd}}$ (dotted line), $3^{\text{rd}}$ (dashed line), $4^{\text{th}}$ (solid line) order approximations. Similar results are obtained for a wide range of parameter values.
\section{Appendix: proof of Theorem \ref{t11}} \label{sec:app} In this appendix we prove Theorem \ref{t11} under Assumption A$_{N+1}$ where $N\in\NN$ is fixed. For simplicity we only consider the case of $r=0$ and time-homogeneous coefficients. Recalling notation \eqref{43bis}, we put \begin{equation}\label{42b} \begin{split}
L_{0}=\frac{\a_0}{2} \lf\partial_{xx}-\partial_x\rg +{\partial}_{t} \end{split} \end{equation} and \begin{equation}\label{43ba}
L_{n}=L_{0}+\sum_{k=1}^{n}\a_k(x-\xbar)^k\lf\partial_{xx}-\partial_x\rg, \qquad n\le N. \end{equation}
Our idea is to modify and adapt the standard characterization of the fundamental solution given by the parametrix method originally introduced by Levi \cite{Levi1907}. The parametrix method is a constructive technique that allows one to prove the existence of the fundamental solution $\G$ of a parabolic operator with variable coefficients of the form
$$Lu(t,x)= \frac{a(x)}{2}\left({\partial}_{xx}-{\partial}_{x}\right)u(t,x)+{\partial}_{t}u(t,x).$$ In the standard parametrix method, for any fixed $\x\in\R$, the fundamental solution $\G_{\x}$ of the frozen operator
$$L_{\x}u(t,x)= \frac{a(\x)}{2}\left({\partial}_{xx}-{\partial}_{x}\right)u(t,x)+{\partial}_{t}u(t,x)$$ is called a {\it parametrix} for $L$. A fundamental solution $\G(t,x;T,y)$ for $L$ can be constructed starting from $\G_{y}(t,x;T,y)$ by means of an iterative argument and by suitably controlling the errors of the approximation.
Our main idea is to {\it use the $N^{\text{th}}$-order approximation $\G^{N}(t,x;T,y)$ in \eqref{34}-\eqref{2.2} (related to $L_{n}$ in \eqref{42b}-\eqref{43ba}) as a parametrix.} In order to prove the error bound \eqref{81}, we carefully generalize some Gaussian estimates: in particular, for $N=0$ we are back in the classical framework, but in general we need accurate estimates of the solutions of the nested Cauchy problems \eqref{2.2}.
By analogy with the classical approach (see, for instance, \cite{Friedman} or the recent and more general presentation in \cite{DiFrancescoPascucci2}), we have that $\G$ takes the form
$$\G(t,x;T,y)=\G^{N}(t,x;T,y)+\int_{t}^{T}\int_{\R}\G^{0}(t,x;s,\x)\Phi^{N}(s,\x;T,y)d\x ds$$ where $\Phi^{N}$ is the function in \eqref{101} below, which is determined by imposing the condition $L\G=0$. More precisely, we have
$$0=L\G(z;\z)=L\G^{N}(z;\z)+\int_{t}^{T}\int_{\R}L\G^{0}(z;w)\Phi^{N}(w;\z)dw-\Phi^{N}(z;\z),$$ where, to shorten notations, we have set $z=(t,x)$, $w=(s,\x)$ and $\z=(T,y)$. Equivalently, we have
$$\Phi^{N}(z;\z)=L\G^{N}(z;\z)+\int_{t}^{T}\int_{\R}L\G^{0}(z;w)\Phi^{N}(w;\z)dw$$ and therefore by iteration \begin{equation}\label{101}
\Phi^{N}(z;\z)=\sum_{n=0}^{\infty}Z^{N}_{n}(z;\z) \end{equation} where \begin{align*}
Z^{N}_{0}(z;\z) & =L\G^{N}(z;\z), \\
Z^{N}_{n+1}(z;\z)& =\int_{t}^{T}\int_{\R}L\G^{0}(z;w)Z^{N}_{n}(w;\z)dw. \end{align*} The thesis is a consequence of the following lemmas. \begin{lemma}\label{lemapp1} For any $n\le N$ the solution of \eqref{2.2}, with $L_{n}$ as in \eqref{42b}-\eqref{43ba}, takes the form \begin{equation}\label{Gnlem} G^n(t,x;T,y)=\sum_{ i\le n,\, j\le n(n+3),\, k\le \frac{n(n+5)}{2}\atop i+j-k\ge n} c^n_{i,j,k}(x-\xbar)^i(\sqrt{T-t})^{j}\partial_x^k G^0(t,x;T,y), \end{equation} where $c^n_{i,j,k}$ are polynomial functions of $\a_0,\a_1,\dots,\a_n$. \end{lemma} \begin{proof} We proceed by induction on $n$. For $n=0$ the thesis is trivial. Next by \eqref{2.2} we have $G^{n+1}(t,x;T,y)=I_{n,2}-I_{n,1}$ where
$$I_{n,l}=\sum\limits_{h=1}^{n+1} \a_h\int_t^T\int_{\R}G^0(t,x;s,\eta)
(\eta-\bar{x})^h \pa_{\eta}^{l} G^{n+1-h}(s,\eta;T,y)d\eta ds,\quad l=1,2.$$ We only analyze the case $l=2$ since the other one is analogous. By the inductive hypothesis \eqref{Gnlem}, we have that $I_{n,2}$ is a linear combination of terms of the form \begin{equation}\label{Ia}
\begin{split}
\int_t^T\int_{\R}G^0(t,x;s,\eta) (\sqrt{T-s})^{j}(\eta-\bar{x})^{h+i-p}{\partial}_{\y}^{k+2-p}G^0(s,\eta;T,y)d \eta ds \end{split} \end{equation} for $p=0,1,2$ and $h=1,\dots,n+1$; moreover we have \begin{align}\label{I3a1}
&i+j-k\ge n+1-h,\\ \label{I3a2}
&i\le n+1-h, \\ \label{I3a3}
&j\le (n+1-h)(n+4-h)\le n(n+3),\\ \label{I3a4}
&k\le \frac{(n+1-h)(n+6-h)}{2}\le \frac{n(n+5)}{2}. \end{align} Again we focus only on $p=0$, the other cases being analogous: then by properties \eqref{V}, \eqref{d} and \eqref{repr}, we have that the integral in \eqref{Ia} is equal to \begin{equation}\label{I3a5}
\int_t^T (\sqrt{T-s})^{j}V_{t,s,x}^{h+i}ds\, \pa_{x}^{k+2} G^0(t,x;T,y) \end{equation} where $V_{t,T,x}\equiv V_{t,T,x,0}$ is the operator in \eqref{38}. Now we remark that $V^{n}_{t,s,x}$ is a finite sum of the form \begin{equation}\label{I2a}
V^{n}_{t,s,x}=\sum_{0\le j_{1},\frac{j_{2}}{2},j_{3}\le n \atop j_{1}+j_{2}-j_{3}\ge n}
b^{n}_{j_{1},j_{2},j_{3}}(x-\xbar)^{j_{1}}(\sqrt{s-t})^{j_{2}}{\partial}_{x}^{j_{3}} \end{equation} for some constants $b^{n}_{j_{1},j_{2},j_{3}}$. Thus the integral in \eqref{I3a5} is a linear combination of terms of the form
$$(x-\xbar)^{j_{1}}(\sqrt{T-s})^{j+2+j_{2}} \pa_{x}^{k+2+j_{3}} G^0(t,x;T,y)$$ where \begin{align}\label{and10a}
&0\le j_{1},\,\frac{j_{2}}{2},\,j_{3}\le h+i,\\ \label{and10b}
&j_{1}+j_{2}-j_{3}\ge h+i. \end{align} Eventually we have \begin{align*}
&j_{1}+j+j_{2}+2-(k+2+j_{3})\ge \intertext{(by \eqref{and10b})}
&\ge i+j-k+h \ge \intertext{(by \eqref{I3a1})}
&\ge n+1. \end{align*} On the other hand, by \eqref{and10a} and \eqref{I3a2} we have \begin{align*}
j_{1}\le h+i\le n+1. \end{align*} Moreover, by \eqref{and10a}, \eqref{I3a2} and \eqref{I3a3} we have \begin{align*}
j+2+j_{2}\le j+2+2(n+1)\le n(n+3)+2+2(n+1)=(n+1)(n+4). \end{align*} Finally, by \eqref{and10a}, \eqref{I3a2} and \eqref{I3a4} we have \begin{align*}
k+2+j_{3}&\le k+2+h+i\le k+n+3\\
&\le\frac{n(n+5)}{2}+n+3=\frac{(n+1)(n+6)}{2}. \end{align*} This concludes the proof. \qquad\end{proof}
Now we set $\xbar=y$ and prove the thesis only in this case: to treat the case $\xbar=x$, it suffices to proceed in a similar way by using the backward parametrix method introduced in \cite{CorielliFoschiPascucci2010}. \begin{lemma}\label{lemapp2} For any $\epsilon,\t >0$ there exists a positive constant $C$, only dependent on
$\e,\t,m,M,N$ and $\max\limits_{k\le N}\|\a_{k}\|_{\infty}$, such that \begin{equation}\label{and14}
\left|\partial_{xx}G^n(t,x;T,y)\right| \leq C(T-t)^{\frac{n-2}{2}}\bar{\G}^{M+\epsilon}(t,x;T,y), \end{equation} for any $n\le N$, $x,y\in\R$ and $t,T\in\R$ with $0<T-t\le \t$. \end{lemma} \begin{proof} By Lemma \ref{lemapp1} with $\xbar=y$, we have \begin{align*}
\left|\partial_{xx}G^n(t,x;T,y)\right| &\leq
\sum_{ i\le n,\, j\le n(n+3),\, k\le \frac{n(n+5)}{2}\atop i+j-k\ge n}
\left|c^n_{i,j,k}\right|\left(\sqrt{T-t}\right)^{j}\cdot\\
&\quad\cdot\left|{\partial}_{xx}\left((x-y)^i\partial_x^k G^0(t,x;T,y)\right)\right|. \end{align*} Then the thesis follows from the boundedness of the coefficients $\a_{k}$, $k\le N$, (cf. Assumption A$_{N}$) and the following standard Gaussian estimates (see, for instance, Lemma A.1 and A.2 in \cite{CorielliFoschiPascucci2010}): \begin{equation}\label{and13} \begin{split}
&\left|\partial_x^k G^0(t,x;T,y)\right| \le c\,\left(\sqrt{T-t}\right)^{-k}\bar{\G}^{M+\epsilon}(t,x;T,y),\\
&\left|\left(\frac{x-y}{\sqrt{T-t}}\right)^{k} G^0(t,x;T,y)\right| \le c\,\bar{\G}^{M+\epsilon}(t,x;T,y), \end{split} \end{equation} where $c$ is a positive constant which depends on $k,m,M,\e$ and $\t$. \qquad\end{proof}
\begin{lemma}\label{lemapp3} For any $\epsilon,\t >0$ there exists a positive constant $C$, only dependent on
$\e,\t,m,M,N$ and $\max\limits_{k\le N+1}\|\a_{k}\|_{\infty}$, such that \begin{equation}
\left|Z_n^N(t,x;T,y) \right| \leq \kappa_{n}(T-t)^{\frac{N+n-1}{2}}\bar{\G}^{M+\epsilon}(t,x;T,y), \end{equation} for any $n\in\NN$, $x,y\in\R$ and $t,T\in\R$ with $0<T-t\le \t$, where
$$\kappa_{n}=C^{n}\frac{\G_{E}\left(\frac{1+N}{2}\right)}{\G_{E}\left(\frac{n+1+N}{2}\right)}$$ and $\G_{E}$ denotes the Euler Gamma function. \end{lemma} \begin{proof} On the basis of definitions \eqref{34} and \eqref{2.2}, by induction we can prove the following formula: \begin{equation}\label{and11}
Z_0^N(z;\z) =L\G^N(z;\z)=\sum_{n=0}^N (L-L_{n})G^{N-n}(z;\z). \end{equation} Indeed, for $N=0$ we have
$$L\G^0(z;\z)=(L-L_0)G^{0}(z;\z),$$ because $L_0G^{0}(z;\z)=0$ by definition. Then, assuming that \eqref{and11} holds for $N\in\NN$, for $N+1$ we have \begin{align*}
L\G^{N+1}(z;\z)&= L\G^N(z;\z)+L G^{N+1}(z;\z)=
\intertext{(by inductive hypothesis and \eqref{2.2})}
&= \sum_{n=0}^N (L-L_n)G^{N-n}(z;\z) + (L-L_0)G^{N+1}(z;\z)\\
&\quad- \sum_{n=1}^{N+1} (L_{n}-L_{n-1})G^{N+1-n}(z;\z)\\
&=\sum_{n=1}^{N+1} (L-L_{n-1})G^{N-(n-1)}(z;\z)+ (L-L_0)G^{N+1}(z;\z)\\
&\quad- \sum_{n=1}^{N+1} (L_{n}-L_{n-1})G^{N+1-n}(z;\z)\\
&=(L-L_0)G^{N+1}+\sum_{n=1}^{N+1} (L-L_{n})G^{N+1-n}(z;\z) \end{align*} from which \eqref{and11} follows.
Then, by \eqref{and11} and Assumption A$_{N+1}$ we have \begin{equation}\label{A2}
\left|Z_0^N(z;\z) \right|\leq\sum_{n=0}^N
\|\a_{n+1}\|_{\infty}|x-y|^{n+1}\left|(\partial_{xx}-\pa_x)G^{N-n}(z;\z)\right| \end{equation} and for $n=0$ the thesis follows from estimates \eqref{and14} and \eqref{and13}. In the case $n\ge 1$, proceeding by induction, the thesis follows from the previous estimates by using the arguments in Lemma 4.3 in \cite{DiFrancescoPascucci2}: therefore the proof is omitted. \qquad\end{proof}
\begin{figure}\caption{CEV-Merton model, $T=0.25$, $K\in[0.5,1.5]$: absolute (left) and relative (right) errors of the approximating formulae against the Monte Carlo $95\%$ (dark gray) and $99\%$ (light gray) confidence regions.}\label{fig1}
\end{figure}
\begin{figure}\caption{CEV-Merton model, $T=1$, $K\in[0.5,2.5]$: absolute (left) and relative (right) errors, as in Figure \ref{fig1}.}\label{fig2}
\end{figure}
\begin{figure}\caption{CEV-Merton model, $T=10$, $K\in[0.5,4]$: absolute (left) and relative (right) errors, as in Figure \ref{fig1}.}\label{fig3}
\end{figure}
\begin{figure}\caption{CEV-Variance-Gamma model: absolute (left) and relative (right) errors of the $2^{\text{nd}}$, $3^{\text{rd}}$ and $4^{\text{th}}$ order approximations against the Monte Carlo $95\%$ and $99\%$ confidence regions.}\label{fig4}
\end{figure}
\begin{figure}\caption{CEV-Variance-Gamma model: $2^{\text{nd}}$ (dotted), $3^{\text{rd}}$ (dashed) and $4^{\text{th}}$ (solid) order approximations against the Monte Carlo confidence regions.}\label{fig5}
\end{figure}
\begin{figure}\caption{CEV-Variance-Gamma model: $2^{\text{nd}}$ (dotted), $3^{\text{rd}}$ (dashed) and $4^{\text{th}}$ (solid) order approximations against the Monte Carlo confidence regions.}\label{fig6}
\end{figure}
\backmatter
\end{document}
\begin{document}
\setcounter{page}{1}
\begin{center}ON SEMI-MODULAR SUBALGEBRAS OF LIE ALGEBRAS OVER FIELDS
OF ARBITRARY CHARACTERISTIC \end{center}
\centerline{DAVID A. TOWERS}
\centerline {Department of Mathematics, Lancaster University} \centerline {Lancaster LA1 4YF, England} \centerline {email: [email protected]}
\begin{abstract} This paper is a further contribution to the extensive study by a number of authors of the subalgebra lattice of a Lie algebra. It is shown that, in certain circumstances, including for all solvable algebras, for all Lie algebras over algebraically closed fields of characteristic $p > 0$ that have absolute toral rank $\leq 1$ or are restricted, and for all Lie algebras having the one-and-a-half generation property, the conditions of modularity and semi-modularity are equivalent, but that the same is not true for all Lie algebras over a perfect field of characteristic three. Semi-modular subalgebras of dimensions one and two are characterised over (perfect, in the case of two-dimensional subalgebras) fields of characteristic different from $2, 3$.\end{abstract}
{\bf Keywords:} Lie algebra; subalgebra lattice; modular; semi-modular; quasi-ideal.
{\bf AMS Subject Classification:} 17B05, 17B50, 17B30, 17B20
\section{Introduction}
This paper is a further contribution to the extensive study by a number of authors of the subalgebra lattice of a Lie algebra, and is, in part, inspired by the papers of Varea (\cite{var}, \cite{mod}). A subalgebra $U$ of a Lie algebra $L$ is called \begin{itemize} \item {\em modular} in $L$ if it is a modular element in the lattice of subalgebras of $L$; that is, if \[ <U,B> \cap C = <B, U \cap C> \hspace{.3in} \hbox{for all subalgebras}\hspace{.1in} B \subseteq C, \] and \[ <U,B> \cap C = <B \cap C,U> \hspace{.3in} \hbox{for all subalgebras}\hspace{.1in} U \subseteq C, \] (where $<U, B>$ denotes the subalgebra of $L$ generated by $U$ and $B$); \item {\em upper modular} in $L$ (um in $L$) if, whenever $B$ is a subalgebra of $L$ which covers $U \cap B$ (that is, such that $U \cap B$ is a maximal subalgebra of $B$), then $<U, B>$ covers $U$; \item {\em lower modular} in $L$ (lm in $L$) if, whenever $B$ is a subalgebra of $L$ such that $<U, B>$ covers $U$, then $B$ covers $U \cap B$; \item {\em semi-modular} in $L$ (sm in $L$) if it is both um and lm in $L$. \end{itemize}
In this paper we extend the study of sm subalgebras started in \cite{sm}. In section two we give an example of a Lie algebra over a perfect field of characteristic three which has a sm subalgebra that is not modular. However, it is shown that for all solvable Lie algebras, and for all Lie algebras over an algebraically closed field of characteristic $p > 0$ that have absolute toral rank $\leq 1$ or are restricted, the conditions of modularity, semi-modularity and being a quasi-ideal are equivalent. The latter extends results of Varea in \cite{mod} where the characteristic of the field is restricted to $p > 7$. It is then shown that for all Lie algebras having the one-and-a-half generation property the conditions of modularity and semi-modularity are equivalent. \par
In section three, sm subalgebras of dimension one are studied. These are characterised over fields of characteristic different from $2, 3$. This result generalises a result of Varea in \cite{var} concerning modular atoms. In the fourth section we show that, over a perfect field of characteristic different from $2, 3$, the only Lie algebra containing a two-dimensional core-free sm subalgebra is $sl_2(F)$. It is also shown that, over certain fields, every sm subalgebra that is solvable, or that is split and contains the normaliser of each of its non-zero subalgebras, is modular. \par
Throughout, $L$ will denote a finite-dimensional Lie algebra over a field $F$. There will be no assumptions on $F$ other than those specified in individual results. The symbol `$\oplus$' will denote a vector space direct sum. If $U$ is a subalgebra of $L$, the {\em core} of $U$, $U_{L}$, is the largest ideal of $L$ contained in $U$; we say that $U$ is {\em core-free} if $U_{L} = 0$. We denote by $R(L)$ the solvable radical of $L$, by $Z(L)$ the centre of $L$, and put $C_L(U) = \{ x \in L : [x, U] = 0 \}$.
\section{General results} We shall need the following result from \cite{sm}.
\begin{lemma}\label{l:pre} Let $U$ be a proper sm subalgebra of a Lie algebra $L$
over an arbitrary field $F$. Then $U$ is maximal and modular in $<U,x>$ for all
$x \in L \setminus U$. \end{lemma}
\noindent {\it Proof}: We have that $U$ is maximal in $<U, x>$, by Lemma 1.4 of \cite{sm}, and hence that $U$ is modular in $<U, x>$, by Theorem 2.3 of \cite{sm}.
In \cite{sm} it was shown that, over fields of characteristic zero, $U$ is modular in $L$ if and only if it is sm in $L$. This result does not extend to all fields of characteristic three, as we show next. Recall that a simple Lie algebra is {\em split} if it has a splitting Cartan subalgebra $H$; that is, if the characteristic roots of ad$_{L}h$ are in $F$ for every $h \in H$. Otherwise we say that it is {\em non-split}.
\begin{propo}\label{p:li} Let $L$ be a Lie algebra of dimension greater than three
over an arbitrary field $F$, and suppose that every two linearly
independent elements of $L$ generate a three-dimensional non-split
simple Lie algebra. Then there are maximal subalgebras $M_{1}$,
$M_{2}$ of $L$ such that $M_{1} \cap M_{2} = 0$. \end{propo}
\noindent {\it Proof}: This is proved in Proposition 4 of \cite{las}.
\noindent{\bf Example}
Let $G$ be the algebra constructed by Gein in Example 2 of \cite{gei}. This is a seven-dimensional Lie algebra over a certain perfect field $F$ of characteristic three. In $G$ every linearly independent pair of elements generates a three-dimensional non-split simple Lie algebra. It follows from Proposition \ref{p:li} above that there are two maximal subalgebras $M$, $N$ in $G$ such that $M \cap N = 0$. Choose any $0 \neq a \in M$. Then $<a,N> \cap M = M$, but $<N \cap M,a> = Fa$, so $Fa$ is not a modular subalgebra of $G$. However, it is easy to see that all atoms of $G$ are sm in $G$.
A subalgebra $Q$ of $L$ is called a {\em quasi-ideal} of $L$ if $[Q,V] \subseteq Q + V$ for every subspace $V$ of $L$. It is easy to see that quasi-ideals of $L$ are always semi-modular subalgebras of $L$. When $L$ is solvable the semi-modular subalgebras of $L$ are precisely the quasi-ideals of $L$, as the next result, which is based on Theorem 1.1 of \cite{var}, shows.
\begin{theor}\label{t:solv} Let $L$ be a solvable Lie algebra over an arbitrary field
$F$ and let $U$ be a proper subalgebra of $L$. Then the following
are equivalent: \begin{description} \item[(i) ] $U$ is modular in $L$; \item[(ii) ] $U$ is sm in $L$; and \item[(iii)] $U$ is a quasi-ideal of $L$. \end{description} \end{theor}
\noindent {\it Proof}: (i) $\Rightarrow$ (ii) : This is straightforward. \par (ii) $\Rightarrow$ (iii) : Let $L$ be a solvable Lie algebra of smallest dimension containing a subalgebra $U$ which is sm in $L$ but is not a quasi-ideal of $L$. Then $U$ is maximal and modular in $L$, by Lemma \ref{l:pre}, and $U_L = 0$. Let $A$ be a minimal ideal of $L$. Then $L = U + A$. Moreover, $U \cap A$ is an ideal of $ L$, since $A$ is abelian, whence $U \cap A = 0$ and $L = U \oplus A$. Now $U$ is covered by $<U, A>$ so $A$ covers $U \cap A = 0$. This yields that dim$(A) = 1$ and so $U$ is a quasi-ideal of $L$, a contradiction. \par (iii) $\Rightarrow$ (i) : This is straightforward.
\begin{coro}\label{c:solv} Let $L$ be a solvable Lie algebra over an arbitrary field $F$ and let $U$ be a core-free sm subalgebra of $L$. Then dim$(U) = 1$ and $L$ is almost abelian. \end{coro}
\noindent {\it Proof}: This follows from Theorem \ref{t:solv} and Theorem 3.6 of \cite{amayo}.
We now consider the case when $L$ is not necessarily solvable. First we shall need the following result concerning $psl_3(F)$.
\begin{propo}\label{p:psl} Let $F$ be a field of characteristic 3 and let $L = psl_3(F)$. Then $L$ has no maximal sm subalgebra. \end{propo}
\noindent {\it Proof}: Let $E_{ij}$ be the $3 \times 3$ matrix that has $1$ in the $(i,j)$-position and $0$ elsewhere, and denote by $\overline{E_{ij}}$ the canonical image of $E_{ij} \in sl_3(F)$ in $psl_3(F)$. Put $e_{-3} = \overline{E_{23}}$, $e_{-2} = \overline{E_{31}}$, $e_{-1} = \overline{E_{12}}$, $e_{0} = \overline{E_{11}} - \overline{E_{22}}$, $e_{1} = \overline{E_{21}}$, $e_{2} = \overline{E_{13}}$, $e_{3} = \overline{E_{32}}$. Then $e_{-3}, e_{-2}, e_{-1}, e_0, e_1, e_2, e_3$ is a basis for $psl_3(F)$ with \[ [e_0, e_i] = e_i \hbox{ if } i > 0, \hspace{.2cm} [e_0, e_i] = - e_i \hbox{ if } i < 0 , \hspace{.2cm} [e_{-i}, e_j] = \delta_{ij}e_0 \hbox{ if } i, j > 0 \hspace{.1cm} \hbox{ and} \] \[ [e_i, e_j] = e_{-k} \hspace{.1cm} \hbox{ for every cyclic permutation } (i,j,k) \hbox{ of } (1,2,3) \hbox{ or } (-3,-2,-1). \] Put $B_{i,j} = Fe_0 + Fe_i + Fe_j$ for each non-zero $i, j$. If $i, j$ are of opposite sign then $B_{i,j}$ is a subalgebra, every maximal subalgebra of which is two dimensional. \par
Let $M$ be a maximal sm subalgebra of $L$. For each $i, j$ of opposite sign, if $B_{i,j} \not \subseteq M$ then $M \cap B_{i,j}$ is two dimensional. Since $M$ is at most five-dimensional, by considering the intersection with each of $B_{1,-1}, B_{2,-2}$ and $B_{3,-3}$ it is easy to see that $e_0 \in M$. But then, considering $B_{1,-1}$ again, we have either $e_1 \in M$ or $e_{-1} \in M$. Suppose the former holds. Taking the intersection of $M$ with $B_{2,-3}$ shows that $e_{-3} \in M$; then with $B_{2,-1}$ gives $e_2 \in M$; next with $B_{3,-2}$ gives $e_{-2} \in M$; finally with $B_{3,-1}$ yields $e_3 \in M$. But then $M = L$, a contradiction. A similar contradiction is easily obtained if we assume that $e_{-1} \in M$.
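The multiplication table used in the proof is easily verified by machine. The following script (ours, not part of the original argument) checks all the stated brackets in $sl_3(F)$ over the field of three elements, working modulo the centre $F\cdot I$ (note that $I$ lies in $sl_3(F)$ precisely because the characteristic is three):
\begin{verbatim}
# Machine check (ours) of the bracket table of psl_3 over GF(3).
def E(i, j):  # elementary 3x3 matrix, 1-indexed arguments
    return tuple(tuple(int((r, c) == (i - 1, j - 1)) for c in range(3))
                 for r in range(3))

def add(A, B, s=1):
    return tuple(tuple((A[r][c] + s * B[r][c]) % 3 for c in range(3))
                 for r in range(3))

def mul(A, B):
    return tuple(tuple(sum(A[r][k] * B[k][c] for k in range(3)) % 3
                 for c in range(3)) for r in range(3))

def br(A, B):  # commutator [A,B] = AB - BA (mod 3)
    return add(mul(A, B), mul(B, A), s=-1)

def smul(s, A):
    return tuple(tuple((s * A[r][c]) % 3 for c in range(3)) for r in range(3))

def eq(A, B):  # equality in psl_3 = sl_3 / F.I
    D = add(A, B, s=-1)
    return (all(D[r][c] == 0 for r in range(3) for c in range(3) if r != c)
            and D[0][0] == D[1][1] == D[2][2])

e = {-3: E(2, 3), -2: E(3, 1), -1: E(1, 2), 0: add(E(1, 1), E(2, 2), s=-1),
     1: E(2, 1), 2: E(1, 3), 3: E(3, 2)}
Z = smul(0, e[0])

for i in (1, 2, 3):
    assert eq(br(e[0], e[i]), e[i]) and eq(br(e[0], e[-i]), smul(-1, e[-i]))
    for j in (1, 2, 3):
        assert eq(br(e[-i], e[j]), e[0] if i == j else Z)
for (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    assert eq(br(e[i], e[j]), e[-k])
for (i, j, k) in [(-3, -2, -1), (-2, -1, -3), (-1, -3, -2)]:
    assert eq(br(e[i], e[j]), e[-k])
print("all psl_3 bracket relations verified")
\end{verbatim}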
Let $(L_p,[p],\iota)$ be any finite-dimensional $p$-envelope of $L$. If $S$ is a subalgebra of $L$ we denote by $S_p$ the restricted subalgebra of $L_p$ generated by $\iota(S)$. Then the {\em (absolute) toral rank} of $S$ in $L$, $TR(S,L)$, is defined by \[ TR(S,L) = \hbox{max} \{\hbox{dim}(T) : T \hbox{ is a torus of } (S_p + Z(L_p))/Z(L_p)\}. \] This definition is independent of the $p$-envelope chosen (see \cite{strade}). We write $TR(L,L) = TR(L)$. Then, following the same line of proof, we have an extension of Lemma 2.1 of \cite{mod}.
\begin{lemma}\label{l:trone} Let $L$ be a Lie algebra over an algebraically closed field of characteristic $p > 0$ such that $TR(L) \leq 1$. Then the following are equivalent: \begin{description} \item[(i) ] $U$ is modular in $L$; \item[(ii) ] $U$ is sm in $L$; and \item[(iii)] $U$ is a quasi-ideal of $L$. \end{description} \end{lemma}
\noindent {\it Proof}: We need only show that (ii) $\Rightarrow$ (iii). Let $U$ be a sm subalgebra of $L$ that is not a quasi-ideal of $L$. Then there is an $x \in L$ such that $<U, x> \neq U + Fx$. We have that $U$ is maximal and modular in $<U, x>$, by Lemma \ref{l:pre}, and $<U, x>$ is not solvable, by Theorem \ref{t:solv}. Furthermore $TR(<U, x>) \leq TR(L) \leq 1$, by Proposition 2.2 of \cite{strade}, and $<U, x>$ is not nilpotent so $TR(<U, x>) \neq 0$, by Theorem 4.1 of \cite{strade}, which yields $TR(<U, x>) = 1$. We may therefore suppose that $U$ is maximal and modular in $L$, of codimension greater than one in $L$, and that $TR(L) = 1$. \par
Put $L^{\infty} = \bigcap_{n \geq 1} L^n$. Suppose first that $R(L^{\infty}) \not \leq U$. Then $U \cap R(L^{\infty})$ is maximal and modular in the solvable subalgebra $R(L^{\infty})$, so $U \cap R(L^{\infty})$ has codimension one in $R(L^{\infty})$. Since $U$ is maximal in $L$ we have $L = U + R(L^{\infty})$ and so dim$(L/U) = 1$, which is a contradiction. This yields that $R(L^{\infty}) \leq U$. Moreover, $L^{\infty} \not \leq U$, since this would imply that $U/L^{\infty}$ is maximal in the nilpotent algebra $L/L^{\infty}$, giving dim$(L/U) = 1$, a contradiction again. It follows that $(U \cap L^{\infty})/R(L^{\infty})$ is modular and maximal in $L^{\infty}/R(L^{\infty})$. But now $L^{\infty}/R(L^{\infty})$ is simple, by Theorem 2.3 of \cite{wint}, and $1 = TR(L) \geq TR(L^{\infty},L) \geq TR(L^{\infty}/R(L^{\infty}))$ by section 2 of \cite{strade}, so $TR(L^{\infty}/R(L^{\infty})) = 1$. This implies that \[ p \neq 2, \hspace{.3cm} L^{\infty}/R(L^{\infty}) \in \{sl_2(F), W(1:\underline{1}), H(2:\underline{1})^{(1)}\} \hbox{ if } p >3 \] \[ \hbox{ and } \hspace{.3cm} L^{\infty}/R(L^{\infty}) \in \{sl_2(F), psl_3(F)\} \hbox{ if } p = 3, \] by \cite{premet} and \cite{sk}. \par
Now $H(2:\underline{1})^{(1)}$ has no modular and maximal subalgebras, by Corollary 3.5 of \cite{var}; likewise $psl_3(F)$ by Proposition \ref{p:psl}. It follows that $L^{\infty}/R(L^{\infty})$ is isomorphic to $W(1:\underline{1})$, which has just one proper modular subalgebra and this has codimension one, by Proposition 2.3 of \cite{var}, or to $sl_2(F)$ in which the proper modular subalgebras clearly have codimension one. Hence dim$(L^{\infty}/(U \cap L^{\infty})) = 1$. Since $L = U + L^{\infty}$ we conclude that dim$(L/U) =$ dim$(L^{\infty}/(U \cap L^{\infty})) = 1$. This contradiction gives the claimed result.
We then have the following extension of Theorem 2.2 of \cite{mod}. The proof is virtually as given in \cite{mod}, but as the restriction to characteristic $> 7$ has been removed the details need to be checked carefully. The proof is therefore included for the convenience of the reader.
\begin{theor}\label{t:restricted} Let $L$ be a restricted Lie algebra over an algebraically closed field $F$ of characteristic $p > 0$, and let $U$ be a proper subalgebra of $L$. Then the following
are equivalent: \begin{description} \item[(i) ] $U$ is modular in $L$; \item[(ii) ] $U$ is sm in $L$; and \item[(iii)] $U$ is a quasi-ideal of $L$. \end{description} \end{theor}
\noindent {\it Proof}: As before it suffices to show that (ii) $\Rightarrow$ (iii). Let $U$ be a sm subalgebra of $L$ that is not a quasi-ideal of $L$. Then there is an $x \in L$ such that $<U, x> \neq U + Fx$. First note that $<U, x>$ is a restricted subalgebra of $L$. For, suppose not and pick $z \in <U, x>_p$ such that $z \notin <U, x>$. Since $<U, x>$ is an ideal of $<U, x>_p$ we have that $[z, U] \leq \hspace{.1cm} <U, x> \cap <U, z>$. But $U$ is maximal in $<U, z>$, by Lemma \ref{l:pre}, and so $<U, x> \cap <U, z> = U$, giving $[z, U] \leq U$. But $U$ is self-idealizing, by Lemma 1.5 of \cite{sm}, so $z \in U$. This contradiction proves the claim. So we may as well assume that $L = <U, x>$. Moreover, $U$ is restricted since it is self-idealizing, whence $(U_L)_p \leq U$. As $(U_L)_p$ is an ideal of $L$ we have that $U_L = (U_L)_p$. It follows that $L/U_L$ is also restricted. We may therefore assume that $U$ is a core-free modular and maximal subalgebra of $L$ of codimension greater than one in $L$. \par
Now $L$ is spanned by the centralizers of tori of maximal dimension, by Corollary 3.11 of \cite{wint}, so there is such a torus $T$ with $C_L(T) \not \leq U$. Let $L = C_L(T) \oplus \sum L_{\alpha}(T)$ be the decomposition of $L$ into eigenspaces with respect to $T$. We have that $C_L(T)$ is a Cartan subalgebra of $L$, by Theorem 2.14 of \cite{wint}. It follows from the nilpotency of $C_L(T)$ and the modularity of $U$ that $U \cap C_L(T)$ has codimension one in $C_L(T)$. \par Now let $L^{(\alpha)} = \sum_{i \in P} L_{i \alpha}(T)$, where $P$ is the prime field of $F$, be the $1$-section of $L$ corresponding to a non-zero root $\alpha$. From the modularity of $U$ we see that $U \cap L^{(\alpha)}$ is a modular and maximal subalgebra of $L^{(\alpha)}$. Since $U$ is core-free and self-idealizing, $Z(L) = 0$. But then $TR(T,L) = TR(L)$, since $T$ is a maximal torus, whence $TR(L^{(\alpha)}) \leq 1$, by Theorem 2.6 of \cite{strade}. It follows from Lemma \ref{l:trone} that $U \cap L^{(\alpha)}$ is a quasi-ideal of $L^{(\alpha)}$. As $U \cap L^{(\alpha)}$ is maximal in $L^{(\alpha)}$, we have that dim$(L^{(\alpha)}/(U \cap L^{(\alpha)})) \leq 1$ and $L^{(\alpha)} = U \cap L^{(\alpha)} + C_L(T)$. This yields that $L = U + C_L(T)$ and hence that dim$(L/U) =$ dim$(C_L(T)/(U \cap C_L(T))) = 1$, a contradiction. The result follows.
We shall say that the Lie algebra $L$ has the {\em one-and-a-half generation property} if, given any $0 \neq x \in L$, there is an element $y \in L$ such that $<x, y> = L$. Then we have the following result.
\begin{theor}\label{t:gen} Let $L$ be a Lie algebra, over any field $F$, which has the one-and-a-half generation property. Then every sm subalgebra of $L$ is a modular maximal subalgebra of $L$. \end{theor}
\noindent {\it Proof}: Let $U$ be a sm subalgebra of $L$ and let $0 \neq u \in U$. Then there is an element $x \in L$ such that $L = <u, x> = <U, x>$. It follows from Lemma \ref{l:pre} that $U$ is modular and maximal in $L$.
\begin{coro}\label{c:class} Let $L$ be a Lie algebra over an infinite field $F$ of characteristic different from $2, 3$ which is a form of a classical simple Lie algebra. Then every sm subalgebra of $L$ is a modular maximal subalgebra of $L$. \end{coro}
\noindent {\it Proof}: Under the given hypotheses $L$ has the one-and-a-half generation property, by Theorem 2.2.3 and section 1.2.2 of \cite{bois}, or by \cite{eld}.
We also have the following analogue of a result of Varea from \cite{var}.
\begin{coro}\label{c:zass} Let $F$ be an infinite perfect field of characteristic $p > 2$, and assume that $p^n \neq 3$. Then the subalgebra $W(1: \bf{n})_0$ is the unique sm subalgebra of $W(1: \bf{n})$. \end{coro}
\noindent {\it Proof}: Let $L = W(1: \bf{n})$ and let $\Omega$ be the algebraic closure of $F$. Then $L \otimes_F \Omega$ is simple and has the one-and-a-half generation property, by Theorem 4.4.8 of \cite{bois}. It follows that $L$ has the one-and-a-half generation property (see section 1.2.2 of \cite{bois}). Let $U$ be a sm subalgebra of $L$. Then $U$ is modular and maximal in $L$ by Theorem \ref{t:gen}. Suppose that $U \neq L_0$. Then $L = U + L_0$ and $U \cap L_0$ is maximal in $L_0$. But $L_0$ is supersolvable (see Lemma 2.1 of \cite{ad} for instance) so dim$(L_0/(L_0 \cap U)) = 1$. It follows that dim$(L/U)$ = dim$(L_0/(L_0 \cap U)) = 1$, whence $U = L_0$, which is a contradiction.
\section{Semi-modular atoms} We say that $L$ is {\em almost abelian} if $L = L^{2} \oplus Fx$ with ${\rm ad}\,x$ acting as the identity map on the abelian ideal $L^{2}$. A {\em $\mu$-algebra} is a non-solvable Lie algebra in which every proper subalgebra is one dimensional. A subalgebra $U$ of a Lie algebra $L$ is a {\em strong ideal} (respectively, {\em strong quasi-ideal}) of $L$ if every one-dimensional subalgebra of $U$ is an ideal (respectively, quasi-ideal) of $L$; it is {\em modular*} in $L$ if it satisfies a dualised version of the modularity conditions, namely \[ <U,B> \cap C = <B, U \cap C> \hspace{.3in} \hbox{for all subalgebras}\hspace{.1in} B \subseteq C, \] and \[ <U \cap B, C> = <B, C> \cap U \hspace{.3in} \hbox{for all subalgebras}\hspace{.1in} C \subseteq U. \]
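The two-dimensional non-abelian Lie algebra gives the smallest example of an almost abelian algebra: for $L = Fx \oplus Fy$ with $[x,y] = y$ we have $L^{2} = Fy$, and ${\rm ad}\,x$ acts as the identity map on $L^{2}$.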
\noindent{\bf Example}
Let $K$ be the three-dimensional Lie algebra with basis $a, b, c$ and multiplication $[a,b] = c$, $[b,c] = b$, $[a,c] = a$ over a field of characteristic two. Then $K$ has a unique one-dimensional quasi-ideal, namely $Fc$. Thus for each $0 \not = u\in Fc$ and $k\in K \setminus Fc$ we have that $<u,k>$ is two dimensional. However $K$ is not almost abelian. In fact $K$ is simple, $Fc$ is core-free and is the Frattini subalgebra of $K$, and so any two linearly independent elements not in $Fc$ generate $K$.
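(A direct check: writing $k = \alpha a + \beta b + \gamma c$, in characteristic two we have $[c,k] = \alpha a + \beta b = k + \gamma c$, so $Fc + Fk$ is a subalgebra of $K$ and $<c,k>$ is at most two dimensional for every $k \in K$.)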
We shall need a result from \cite{bowv}. However, because of the above example, there is a (slight) error in three results in this paper. The error comes from an incorrect use of Theorem 3.6 of \cite{amayo}. The three corrected results are as follows:
\begin{lemma}\label{l:strong} (Lemma 2.2 of \cite{bowv}) If $Q$ is a strong quasi-ideal of $L$, then $Q$ is a strong ideal of $L$, or $L$ is almost abelian, or $F$ has characteristic two, $L = K$ and $Q = Fc$. \end{lemma}
\noindent {\it Proof}: Assume that $Q$ is a strong quasi-ideal and that there exists $q \in Q$ such that $Fq$ is not an ideal of $L$. Then Theorem 3.6 of \cite{amayo} gives that $L$ is almost abelian, or $F$ has characteristic two, $L = K$ and $Q = Fc$. The result follows.
The proof of the following result is the same as the original.
\begin{propo}\label{p:mod*} (Proposition 2.3 of \cite{bowv}) Let $Q$ be a proper quasi-ideal of a Lie algebra $L$ which is modular* in $L$. Then $Q$ is a strong quasi-ideal and so is given by Lemma \ref{l:strong}. \end{propo}
\begin{lemma}\label{l:mu} (Lemma 4.1 of \cite{bowv}) Let $L$ be a Lie algebra over an arbitrary field $F$. Let $U$ be a core-free subalgebra of $L$ such that $<u,z>$ is either two dimensional or a $\mu$-algebra for every $0\not = u\in U$ and $z\in L \setminus U$. Then one of the following holds: \begin{description} \item[(i) ] $L$ is almost abelian; \item[(ii) ] $<u,z>$ is a $\mu$-algebra for every $0\not = u\in U$ and $z\in L \setminus U$; \item[(iii)] $F$ has characteristic two, $L=K$ and $Fu = Fc$. \end{description} \end{lemma}
\noindent {\it Proof}: This is the same as the original proof except that the following should be inserted at the end of sentence six: ``or char$F=2$ and $L=K$''.
Using the above we now have the following result.
\begin{lemma}\label{l:atom} Suppose that $Fu$ is sm in $L$ but not an ideal of $L$.
Then one of the following holds: \begin{description} \item[(i) ] $L$ is almost abelian; \item[(ii)] $<u,x>$ is a $\mu$-algebra for every $x \in L \setminus
Fu$; \item[(iii)] $F$ has characteristic two, $L=K$ and $Fu=Fc$. \end{description} \end{lemma}
\noindent {\it Proof}: Pick any $x \in L \setminus Fu$. Then $Fu$ is maximal
in $<u,x>$, by Lemma \ref{l:pre}. Now let $M$ be a maximal subalgebra of
$<u,x>$. If $u \in M$ then $M = Fu$. So suppose that $u \not \in
M$. Then $Fu$ is a maximal subalgebra of $<u,x> = <u,M>$, whence
$Fu \cap M = 0$ is maximal in $M$, since $Fu$ is lm in $L$. It follows that
every maximal subalgebra of $<u,x>$ is one dimensional. The claimed result
now follows from Lemma \ref{l:mu}.
We shall need the following result concerning `one-and-a-half generation' of rank one simple Lie algebras over infinite fields of characteristic $\neq 2,3$.
\begin{theor}\label{t:simple} Let $L$ be a rank one simple Lie algebra over an infinite
field $F$ of characteristic $\neq 2,3$ and let $Fx$ be a Cartan
subalgebra of $L$. Then there is an element $y \in L$ such that
$<x,y> = L$. \end{theor}
\noindent {\it Proof}: Since $L$ is rank one simple it is central simple. Let
$\Omega$ be the algebraic closure of $F$ and put $L_{\Omega} = L
\otimes_{F} \Omega$, and so on. Then $L_{\Omega}$ is simple and
$\Omega x$ is a Cartan subalgebra of $L_{\Omega}$. Let \[ L_{\Omega} = \Omega x \oplus \sum_{\alpha \in \Phi} (L_{\Omega})_{\alpha} \] be the decomposition of $L_{\Omega}$ into its root spaces relative to $\Omega x$. Then, with the given restrictions on the characteristic of the field, every root space $(L_{\Omega})_{\alpha}$ is one dimensional (see \cite{bo}). \par Let $M$ be a maximal subalgebra of $L$ containing $x$. Then $M_{\Omega}$ is a subalgebra of $L_{\Omega}$ and $\Omega x \subseteq M_{\Omega}$. So, $M_{\Omega}$ decomposes into root spaces relative to $\Omega x$, \[ M_{\Omega} = \Omega x \oplus \sum_{\alpha \in \Delta} (M_{\Omega})_{\alpha}. \] We have that $\Delta \subseteq \Phi$ and $(M_{\Omega})_{\alpha} \subseteq (L_{\Omega})_{\alpha}$ for all $\alpha \in \Delta$. As $(L_{\Omega})_{\alpha}$ is one dimensional for every $\alpha \in \Phi$, we have $(M_{\Omega})_{\alpha} = (L_{\Omega})_{\alpha}$ for every $\alpha \in \Delta$. Hence there are only finitely many maximal subalgebras of $L$ containing $x$: $M_{1}, \dots , M_{r}$ say. Since $F$ is infinite, $\cup_{i=1}^{r} M_{i} \neq L$, so there is an element $y \in L$ such that $y \not \in M_{i}$ for all $1 \leq i \leq r$. But now $<x,y> = L$, as claimed.
If $U$ is a subalgebra of $L$, then the {\em normaliser} of $U$ in $L$ is the set $$N_{L}(U) = \{x \in L : [x, U] \subseteq U\}.$$ We can now give the following characterisation of one-dimensional semi-modular subalgebras of Lie algebras over fields of characteristic $\neq 2,3$.
\begin{theor}\label{t:smatom} Let $L$ be a Lie algebra over a field $F$, of
characteristic $\neq 2,3$ if $F$ is infinite. Then $Fu$ is sm in $L$ if and only if
one of the following holds: \begin{description} \item[(i) ] $Fu$ is an ideal of $L$; \item[(ii) ] $L$ is almost abelian and ad $u$ acts as a non-zero
scalar on $L^{2}$; \item[(iii)] $L$ is a $\mu$-algebra. \end{description} \end{theor}
\noindent {\it Proof}: It is easy to check that if (i), (ii), or (iii) hold then
$Fu$ is sm in $L$. So suppose that $Fu$ is sm in $L$, but that neither (i) nor
(ii) holds. First we claim that $L$ is simple.
\par Suppose not, and let $A$ be a minimal ideal of $L$. If $u \in A$, choose any $b \in L \setminus A$. Then $<u,b> \cap A$ is an ideal of $<u,b>$. Since $0 \neq u \in <u,b> \cap A$ and $b \not \in A$, $<u,b>$ cannot be a $\mu$-algebra. But then $L$ is almost abelian, by Lemma \ref{l:atom}, a contradiction. So $u \not \in A$. By Lemma 3.3 of \cite{sm}, $[u,a] = \lambda a$ for all $a \in A$ and some $\lambda \in F$. But now $Fu + Fa$ is a two-dimensional subalgebra of $<u,a>$, a $\mu$-algebra, which is impossible. Hence $L$ is simple. \par Now $Fu$ is um in $L$ and not an ideal of $L$, so $N_{L}(Fu) = Fu$, by Lemma 1.5 of \cite{sm}. Hence $Fu$ is a Cartan subalgebra of $L$, and $L$ is rank one simple. Now $F$ cannot be finite, since there are no $\mu$-algebras over finite fields, by Corollary 3.2 of \cite{farn}. Hence $F$ is infinite. But then there is an element $y \in L$ such that $<u,y> = L$, by Theorem \ref{t:simple}, and $L$ is a $\mu$-algebra. The result is established.
As a corollary to this we have a result of Varea, namely Corollary 2.3 of \cite{ss}.
\begin{coro}\label{c:matom} (Varea) Let $L$ be a Lie algebra over a perfect field $F$, of characteristic $\neq 2,3$ if $F$ is infinite. If $Fu$ is modular in $L$ but not an ideal of $L$ then $L$ is either almost abelian or three-dimensional non-split simple. \end{coro}
\noindent {\it Proof}: This follows from Theorem \ref{t:smatom} and the fact that with the stated restrictions on $F$ the only $\mu$-algebras are three-dimensional non-split simple (Proposition 1 of \cite{gei}).
\section{Semi-modular subalgebras of higher dimension} First we consider two-dimensional semi-modular subalgebras. We have the following analogue of Theorem 1.6 of \cite{var}.
\begin{theor}\label{t:twodim} Let $L$ be a Lie algebra over a perfect field $F$ of characteristic different from 2, 3, and let $U$ be a two-dimensional core-free sm subalgebra of $L$. Then $L \cong sl_{2}(F)$. \end{theor}
\noindent {\it Proof}: If $U$ is modular then the result follows from Theorem 1.6 of \cite{var}, so we can assume that $U$ is not a quasi-ideal of $L$. Thus, there is an element $x \in L$ such that $<U,x> \neq U + Fx$. Put $V = <U,x>$. Then $U_{V} = U$ implies that $<U,x> = U + Fx$, a contradiction; if $U_{V} = 0$ then $V \cong sl_{2}(F)$ by Lemma \ref{l:pre} and Theorem 1.6 of \cite{var}, and $<U,x> = U + Fx$, a contradiction. It follows that dim$(U_{V}) = 1$. Put $U_{V} = Fu$. Now dim$(U/U_{V}) = 1$ and $V/U_{V}$ is three-dimensional non-split simple, by Theorem \ref{t:smatom} and Proposition 1 of \cite{gei}. Thus $V = Fu \oplus S$, where $S$ is three-dimensional non-split simple, by Lemma 1.4 of \cite{var}, and $Fu$, $S$ are ideals of $V$. \par Now we claim that $0 \neq Z(<U,y>) \subseteq U$ for every $y \in L \setminus U$. We have shown this above if $<U,y> \neq U + Fy$. So suppose that $<U,y> = U + Fy$. Then $<U,y>$ is three dimensional and not simple (since $U$ is two dimensional and abelian), and so solvable. Then, by using Corollary \ref{c:solv}, we have that $U$ contains a one-dimensional ideal $K$ of $U + Fy$ such that $(U + Fy)/K$ is two-dimensional non-abelian, and $K = Z(<U, y>)$. \par Since $U$ is maximal in $<U,x>$ we have $<U,x> \neq L$. Pick $y \in$ \mbox{$L \setminus <U,x>$}. Then $0 \neq Z(<U,x+y>) \subseteq U$ by the above. Suppose $Z(<U,x>) \neq Z(<U,y>)$. Then $U = Z(<U,x>) \oplus Z(<U,y>)$. Let $0 \neq z \in Z(<U,x+y>)$ and write $z = z_{1} + z_{2}$ where $z_{1} \in$ \mbox{$Z(<U,x>)$}, $z_{2} \in Z(<U,y>)$. Then $0 = [z,(x + y)] = [z_{2},x] + [z_{1},y]$, so $[z_{2},x] = - [z_{1},y]$. Now, if $z_{1} = 0$, then $[z_{2},x] = 0$, whence $z_{2} \in Z(<U,x>) \cap$ \mbox{$Z(<U,y>)$}, a contradiction. Similarly, if $z_{2} = 0$, then $[z_{1},y] = 0$, whence $z_{1} \in$ \mbox{$Z(<U,x>) \cap Z(<U,y>)$}, a contradiction again. Hence $z_{1}, z_{2} \neq 0$. Since $z_{1}, z_{2} \in U$ we deduce that $[z_{1}, y] = -[z_{2},x] \in <U,x> \cap <U,y> = U$. Thus $y \in N_{L}(U) = U$, a contradiction. It follows that $Z(<U,x>) =$ \mbox{$Z(<U,y>)$} for all $y \in L$, whence $[L, Z(<U,x>)] = 0$ and $Z(<U,x>)$ is an ideal of $L$, contradicting the fact that $U$ is core-free.
Next we establish analogues of two results of Varea from \cite{var}.
\begin{theor}\label{t:vsolv} Let $L$ be a Lie algebra over an algebraically closed field $F$ of characteristic $p > 5$. If $U$ is a sm subalgebra of $L$ such that $U/U_L$ is solvable and dim$(U/U_L) > 1$, then $U$ is modular in $L$, and hence $L/U_L$ is isomorphic to $sl_2(F)$ or to a Zassenhaus algebra. \end{theor}
\noindent {\it Proof}: Let $L$ be a Lie algebra of minimal dimension having a sm subalgebra $U$ which is not modular in $L$, and such that $U/U_L$ is solvable and dim$(U/U_L) > 1$. Then $U_L = 0$ and $U$ is solvable. Since $U$ is not a quasi-ideal there is an element $x \in L \setminus U$ such that $S = <U,x> \neq U + Fx$. Let $K = U_S$. If dim$(U/K) = 1$ then $S/K$ is almost abelian, by Theorem \ref{t:smatom}, whence $U$ is a quasi-ideal of $S$, a contradiction. It follows that dim$(U/K) > 1$. If $U/K$ is modular in $S/K$ then dim$(S/U) = 1$, by Theorem 2.4 of \cite{var}, a contradiction. The minimality of $L$ then implies that $S = L$. This yields that $U$ is modular in $L$, by Lemma \ref{l:pre}. This contradiction establishes the result.
We say that the subalgebra $U$ of $L$ is {\em split} if ad$_Lx$ is split for all $x \in U$; that is, if ad$_Lx$ has a Jordan decomposition into semisimple and nilpotent parts for all $x \in U$.
\begin{theor}\label{t:vstar} Let $L$ be a Lie algebra over a perfect field $F$ of characteristic $p$ different from 2. If $U$ is a sm subalgebra of $L$ which is split and which contains the normaliser of each of its non-zero subalgebras, then $U$ is modular, and one of the following holds: \begin{description} \item[(i)] $L$ is almost abelian and dim$(U) = 1$; \item[(ii)] $L \cong sl_2(F)$ and dim$(U) = 2$; \item[(iii)] $L$ is a Zassenhaus algebra and $U$ is its unique subalgebra of codimension one in $L$. \end{description} \end{theor}
\noindent {\it Proof}: Let $L$ be a Lie algebra of minimal dimension having a sm subalgebra $U$ which is split and which contains the normaliser of each of its non-zero subalgebras, but which is not modular in $L$. Since $U$ is not a quasi-ideal there is an element $x \in L \setminus U$ such that $S = <U,x> \neq U + Fx$. If $S \neq L$ then $U$ is modular in $S$, by the minimality of $L$. It follows from Theorem 2.7 of \cite{var} that $U$ is a quasi-ideal of $S$, a contradiction. Hence $S = L$. Once again we see that $U$ is modular in $L$, by Lemma \ref{l:pre}. This contradiction establishes that $U$ is modular in $L$. The result now follows from Theorem 2.7 of \cite{var}.
\end{document}
Yoichi Miyaoka
Yoichi Miyaoka (宮岡 洋一, Miyaoka Yōichi) is a mathematician who works in algebraic geometry and who proved (independently of Shing-Tung Yau's work) the Bogomolov–Miyaoka–Yau inequality in an Inventiones Mathematicae paper.[1]
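The inequality states that the Chern numbers of a compact complex surface of general type satisfy c₁² ≤ 3c₂.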
In 1984, Miyaoka extended the Bogomolov–Miyaoka–Yau inequality to surfaces with quotient singularities, and in 2008 to orbifold surfaces. In doing so, he obtained a sharp bound on the number of quotient singularities on surfaces of general type. Moreover, the inequality for orbifold surfaces gives explicit values for the coefficients of the so-called Lang–Vojta conjecture, which relates the degree of a curve on a surface to its geometric genus.
References
1. Miyaoka, Yoichi (1977-12-01). "On the Chern numbers of surfaces of general type". Inventiones Mathematicae. 42 (1): 225–237. Bibcode:1977InMat..42..225M. doi:10.1007/BF01389789. ISSN 1432-1297. S2CID 120699065.
Focal Spot and Wavefront Sensing of an X-Ray Free Electron laser using Ronchi shearing interferometry
Bob Nagler1,
Andrew Aquila1,
Sébastien Boutet1,
Eric C. Galtier1,
Akel Hashim1,
Mark S. Hunter1,
Mengning Liang1,
Anne E. Sakdinawat1,
Christian G. Schroer ORCID: orcid.org/0000-0002-9759-12002,3,
Andreas Schropp2,
Matthew H. Seaberg1,
Frank Seiboth1,2,
Tim van Driel1,
Zhou Xing1,
Yanwei Liu1 &
Hae Ja Lee1
Scientific Reports volume 7, Article number: 13698 (2017)
The Linac Coherent Light Source (LCLS) is an X-ray source of unmatched brilliance that is advancing many scientific fields at a rapid pace. The highest peak intensities that are routinely produced at LCLS occur at the Coherent X-ray Imaging (CXI) instrument, which can produce spot sizes on the order of 100 nm; such spot sizes and intensities are crucial for experiments ranging from coherent diffractive imaging, non-linear x-ray optics and high field physics, to single molecule imaging. Nevertheless, a full characterisation of this beam has up to now not been performed. In this paper we characterise, for the first time, this nanofocused beam in both phase and intensity using a Ronchi shearing interferometric technique. The method is fast, in situ, uses a straightforward optimization algorithm, and is insensitive to spatial jitter.
Experimental Description
The start of operations of free-electron lasers both in the extreme UV1,2 and in the hard X-ray regime3,4,5 has created sources of unmatched brilliance that are advancing many scientific fields at a rapid pace (see6,7 and references therein). A complete focal characterization in both intensity and phase is of crucial importance in applications such as coherent diffractive imaging8, non-linear x-ray optics and high field physics, and single molecule imaging, where the highest X-ray intensities are sought.
A standard technique currently used involves evaluating the size of damage craters created by the focused X-ray beam in a target9. While this method has the advantage that it measures the whole intensity profile (i.e. both the coherent and incoherent parts), it requires a time-consuming post mortem analysis of many such imprints, is not an in-situ method, and has limited spatial resolution. Scanning coherent diffraction microscopy or ptychography has been successfully used to characterise a focused X-FEL beam10. However, ptychography requires a 2D motorized translation stage with approximately 10 nm resolution and a beam pointing stability of the same order, which is not always available. Alternatively, interferometric methods using shearing interferometry have been pursued both at free electron laser facilities11 and at synchrotron facilities12,13. Here we present a method to fully determine the focus of an X-FEL using Ronchi shearing interferometry. Ronchi testing has a long history in characterizing the quality of focusing optics at optical wavelengths14. It has more recently been used to qualitatively evaluate the X-ray optics at both synchrotron facilities15 and at X-ray Free Electron Lasers16. In this paper we show a full characterization of the amplitude and phase of a nano-focused X-FEL beam using Ronchigrams.
The experiments were performed at the Coherent X-ray Imaging instrument (CXI)17,18 beamline at the Linac Coherent Light Source (LCLS). The CXI instrument has a pair of highly polished Kirkpatrick-Baez (KB) mirrors19 coated with Silicon Carbide20 to focus the beam to a theoretical minimal spot size of 90 nm by 150 nm17,21. In contrast to the beamlines that use Beryllium lenses to create a focus, it doesn't suffer from chromatic aberration and its aperture is large enough to capture the full beam. Therefore, it routinely creates the highest peak X-ray intensities of the facility, estimated to be on the order of 1 × 1020 W/cm2. It has been used in many fluence-dependent experiments such as the formation of hollow atoms22, anomalous nonlinear X-ray Compton scattering23, and radiation damage studies on protein microcrystals24. In the experiment presented here, the LCLS beam, with a photon energy of 7.2 keV, was focused with the KB-mirror pair, which has focal lengths of 900 mm in the horizontal direction and 500 mm in the vertical direction. A one-dimensional diffraction grating (i.e. the Ronchi target) is placed 9.3 mm downstream of the X-ray focal plane. An X-ray detector is placed 982 mm downstream of the Ronchi target. A conceptual sketch of the setup can be seen in Fig. 1.
Conceptual sketch (not to scale) of the setup. The period of the Ronchi grating is chosen such that orders +1 and −1 do not overlap, while maintaining as large an overlap as possible between order 0 and the first orders. Only the 0 and +1 orders are shown for clarity.
Ronchi gratings were fabricated on 4 μm thick polished diamond membranes (Diamond Materials GmbH). An 8 nm layer of Ti was evaporated on the diamond, and 150 nm hydrogen silsesquioxane (HSQ) resist was spun on top of the Ti. Electron beam lithography was performed using a 100 keV beam with doses ranging from 2200–2600 μC/cm2. The gratings were then developed in 25% wt tetramethyl ammonium hydroxide (TMAH) for 100 seconds and rinsed with isopropyl alcohol and deionized water. Transfer of the HSQ grating pattern into diamond was performed using reactive ion etching. A 15 second titanium etch using chlorine was used to remove the Ti layer. The diamond was then etched for 50 minutes using an O2/Ar plasma (33/17 sccm, 10 mTorr, RIE power = 100 W) until the etch depth reached 1.1 μm. Using atomic layer deposition (ALD), 78 nm platinum was deposited conformally to fill the diamond gratings.
The Ronchi target functions as a diffraction grating for the incoming, focused X-rays. The spatial frequency of the grating is chosen such that the first orders overlap with the fundamental, but do not overlap with each other. The best configuration is attained when the diffraction angle is half of the full-angle divergence of the focused X-ray beam. Indeed, for a larger diffraction angle the overlap is less, and there is a central part of the beam for which there is no interference data available, and therefore the phase will not be determined by the measurement in this area. On the other hand, if the diffraction angle is too small, the +1 and −1 orders will overlap, interfere with each other and not only with the zeroth order, and the phase recovery method will not work. When the spatial period of the grating equals 2f#λ, with f# the f-number of the optic and λ the wavelength of the X-rays, the ideal overlap between the first orders and the zeroth order is attained. The zeroth and first order interfere and cause a fringe pattern on the detector. Figure 1 shows that this pattern arises from the interference of two sources that emit spherical wavefronts: the focus of the zeroth order, and the (virtual) focus of the first order. The fringes are therefore analogous to those of Young's double slit experiment, and it can easily be shown that the phase difference between the zeroth order and first order changes linearly with x according to
$${\rm{\Delta }}\phi =2\pi \frac{{z}_{1}}{D{z}_{2}}x+{C}_{x}$$
with z2 the distance between the X-ray camera and the focus, z1 the distance between the Ronchi grating and the focus, D the period of the grating, and Cx a constant in x (see methods section for more details). The phase difference results in a linear fringe pattern on the detector; such patterns are called Ronchigrams. Three Ronchigrams of the beam are seen in Fig. 2(a–c), where the analysis mask of the zeroth order is shown by the red rectangle, and the positions of the +1 and −1 orders by the blue and green rectangles respectively. The fringe density can be tuned to the resolution of the camera by translating the grating with respect to the focus, which effectively changes z1 in Eq. (1).
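As a quick numerical illustration (ours, not part of the published analysis), plugging the geometry quoted above into Eq. (1) gives the fringe period on the detector; with the 225 nm grating 9.3 mm from focus and the detector 982 mm further downstream, the fringes are a few tens of microns wide and therefore easily resolved:

```python
# Back-of-the-envelope fringe period, Lambda = D * z2 / z1, from Eq. (1).
# The numbers are the ones quoted in the text; this snippet is illustrative
# and not part of the published analysis.
D = 225e-9         # grating period [m]
z1 = 9.3e-3        # grating-to-focus distance [m]
z2 = 982e-3 + z1   # camera-to-focus distance [m] (camera 982 mm past grating)
fringe_period = D * z2 / z1
print(f"fringe period on detector: {fringe_period * 1e6:.1f} um")  # ~24.0 um
```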
Top (a–c): the three Ronchigrams used to calculate the phase of the X-ray beam, with a magnified close-up of the fringes below. The images are taken with gratings with different spatial periods (225 nm for (a,c), and 275 nm for (b)), at the same position with respect to the focus, but with an angle of the grating with respect to the vertical of −37.9° for (a), −15.4° for (b) and 29.6° for (c). The red rectangle is the analysis mask of the zeroth order, the green and blue rectangles are the positions of the −1 and +1 orders respectively. There is no interference and hence no phase information in the white shaded area in (a). Bottom (d–f): False-color image of the difference (i.e. errors) between the phase derived from the Ronchigrams, and the phase derived after re-shearing the recovered wavefront of the beam. RMS error of the images is λ/55 for (a) and λ/40 for (b,c).
As can be seen in Fig. 2 the X-ray beam on the camera looks very asymmetrical, and has two big vertical lobes. The two lobes are caused by the fact that the beam overfills a steering mirror located approximately 330 m upstream of the CXI endstation, causing this feature in the far field image. The total beam size in the vertical direction is roughly 70% bigger than in the horizontal direction. This is caused by the fact that the vertical KB mirror has a smaller focal length and therefore sits closer to the focus (500 mm vs 900 mm), leading to a higher divergence and hence a larger vertical size on the detector.
Aberrations in the wavefront of the focused beam will result in a distortion of the fringe pattern. Standard Fourier transform and phase unwrapping methods25 are used to calculate the phase of the Ronchigrams. However, the retrieved phase is not the phase of the X-ray beam itself, but the phase difference between the beam and a shifted copy of itself. The Ronchi test is a shearing interferometer and quantitative analysis requires one to invert the shearing operator. Shearing interferometry is well described in the literature and many methods exist to analyse the interferograms26,27,28,29,30. The main difficulty stems from the fact that the shearing operator has a kernel, and therefore cannot be inverted mathematically. Indeed, if we define the shearing operator in the x-direction as:
$${\hat{S}}_{x}[f(x,y)]=f(x,y)-f(x-{s}_{x},y)$$
we see that any periodic function in x with a period sx gets mapped onto the null vector. Therefore, trying to determine the beam that gives rise to the Ronchigrams is a mathematically ill-posed problem31. The most common way to resolve this issue is by using standard regularization theory32. In this paper we will follow that road, and adapt an algorithm described in Servin et al.33, in which two orthogonal interferograms are used in combination with an a priori assumption of smoothness of the wavefront. However, an additional difficulty with the Ronchigrams is that the shear is half the beam size, which is large in comparison to typical shearograms. This can result in a large area of the beam where no shearing information is available. For example, in Fig. 2(a) we have no shearing information and therefore no information on the phase in the white shaded area, which is approximately a quarter of the total beam aperture. Furthermore, the area where we do see fringes only yields information on the phase differences in one direction: we do not have any information on phase changes in the beam parallel to the fringes. This problem can be overcome by using enough Ronchigrams to ensure that every area where there is appreciable beam intensity is sheared along at least two angles. In the results presented here, we use the three Ronchigrams shown in Fig. 2(a–c). We only use the interference between the fundamental and the +1 order (i.e. the right side of the image) since the interference with the −1 order yields exactly the same phase information. Together, the three Ronchigrams contain enough information to retrieve the phase of the whole beam since they ensure we have shearograms in at least two directions in almost the entire aperture. An added bonus is that the three shearograms effectively shear both the horizontal and vertical directions with two incommensurate shear values, and since Jacobi has shown that a doubly periodic function with incommensurate periods is necessarily constant34, this removes nearly the whole kernel. Common to interferometric methods, the implicit assumption is made that the beam is spatially coherent, and hence only the coherent part of the beam will be measured. A full description of the inversion algorithm can be found in the methods section of this paper.
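To make the kernel concrete, here is a small sketch (ours, not the authors' code) showing that a phase component that is periodic with the shear length produces an identically zero shearogram and is therefore invisible to a single shear direction:

```python
import numpy as np

# Shear operator S[phi](x) = phi(x) - phi(x - sx), Eq. (2), on a periodic
# grid. A phase that is periodic with period sx lies in the kernel.
sx = 16                                  # shear in pixels
x = np.arange(128)
phi = np.cos(2 * np.pi * x / sx)         # periodic with period sx
shearogram = phi - np.roll(phi, sx)      # S_sx[phi], periodic boundaries
print(np.max(np.abs(shearogram)))        # ~1e-16: invisible to this shear
```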
To validate the inversion, we have sheared the recovered wavefront at angles −37.9°, −15.4° and 29.6° and compared them to the measured phase of the Ronchigrams. The result is shown in Fig. 2(d–f). We find an RMS error of less than \(\tfrac{1}{40}\) of a wavelength for each Ronchigram, ensuring that the inversion algorithm works accurately. We note that the use of three Ronchigrams over-constrains the optimization problem. Indeed, any two Ronchigrams with different shearing directions can always be used to invert the shearing operator with a vanishing error. However, this is not the case when three or more shearograms are used: only those derived from a physical field will result in an illumination phase that yields a small RMS error between the measured shearograms and the calculated ones. This is important since we use three shearograms from three different FEL pulses, and we make the implicit assumption that the phases of these pulses are the same. The fact that such a small RMS error is found validates this assumption and the method in general.
The recovered phase can be seen in Fig. 3(a), while Fig. 3(b) shows the measured intensity of the beam. Using this phase and measured intensity profile, we can calculate the X-ray profile at focus, which is shown in Fig. 3(c–e). The full width at half maximum of the central peak of the focal spot is 167 nm in the vertical and 123 nm in the horizontal. The uncertainty in the wavefront measurement of λ/40 mentioned above results in an uncertainty of approximately 2% in the peak intensity of the beam, applying the Strehl Ratio/Ruze formula35,36.
(a) Wavefront of the x-ray beam, (b) Measured intensity of the beam. (c) Intensity of the focal spot at best focus. (d) Vertical lineout (red) and (e) horizontal lineout (blue) through focus, compared with the theoretical ideal focus (dashed) if no aberrations were present. The peak intensity is 3.9 × 1019 W/cm2 for the 3 mJ beam energy and 60 fs pulse length used in the experiment.
In order to demonstrate the value of the information gained from the Ronchigram wavefront retrieval, a simulation of the aberrations due to misalignment of the KB mirror pair was performed. In the simulation, the vertical focusing mirror (VFM) was misaligned by 9 μrad and the horizontal focusing mirror (HFM) was misaligned by 7.5 μrad, resulting in a best focus 2 mm upstream of the nominal focus position. Horizontal and vertical lineouts are shown in Fig. 4, along with lineouts corresponding to operation of the system with the mirrors aligned to the design angle. From comparison with Fig. 3(d,e), it is clear that the majority of the observed vertical aberrations are captured in the simulated misalignment of the VFM. However, the horizontal direction is strongly affected by aberrations caused by two horizontal steering mirrors 330 m upstream of the KB pair. While an attempt was made to account for these aberrations in the simulation, the Ronchigram result suggests that there are additional aberrations that were not fully captured in the simulation. The analysis presented here underscores the importance of techniques which retrieve both the amplitude and phase of the focus. Other techniques, such as the use of imprints, rely on the assumption that the interaction plane has been chosen correctly. As can be seen in Fig. 4, a small error of 2 mm along the beam axis can result in a peak intensity that is a factor of two below what can be achieved with ideal alignment.
(a) Simulation of the focal spot of a misaligned KB-pair. The vertical focusing mirror was misaligned by 9 μrad and the horizontal focusing mirror by 7.5 μrad. (b) Vertical lineout of the spot in (a), compared with the spot from a perfectly aligned mirror. (c) Horizontal lineout.
In conclusion, we have presented a method to determine the focus of a free-electron laser using Ronchi shearing interferometry. The method is fast, in situ and does not require high beam pointing stability. The method has been applied to the nanofocus of the CXI beamline at LCLS, and is readily applicable to other X-ray beamlines and focal sizes.
Ronchigrams
A Ronchi grating with a duty cycle of 1 and with the orientation of the grating in the x direction can be described as:

$$R(x,y)=({T}_{2}\,{{\rm{rect}}}_{D/2}(x)+{T}_{1}\,{{\rm{rect}}}_{D/2}(x-D/2))\bigstar {{\rm{comb}}}_{D}(x)$$

where \(\bigstar \) denotes the convolution operator, T1 and T2 are the complex transmission functions of the two parts of the grating, the Dirac comb is defined as

$${{\rm{comb}}}_{D}(x)=\sum _{n=-\infty }^{+\infty }\,\delta (x-nD)$$

and the rectangle function defined as
$${{\rm{rect}}}_{L}(x)=\{\begin{array}{ll}1 & {\rm{if}}\,-L\mathrm{/2} < x < L\mathrm{/2}\,\\ 0 & {\rm{elsewhere}}\,\end{array}$$
Taking the two dimensional Fourier transform of the Ronchi grating we get:
$$ {\mathcal R} ({k}_{x},{k}_{y})=2\pi \,\sum _{l=-\infty }^{+\infty }\,{R}_{l}\delta ({k}_{x}-{k}_{l})\,\delta ({k}_{y})$$
$${k}_{l}=\frac{2\pi }{D}l\,{\rm{with}}\,l\in {\mathbb{Z}}$$
$${R}_{l}=\{\begin{array}{ll}({T}_{1}+{T}_{2})\pi & {\rm{if}}\,l=0\\ ({T}_{2}-{T}_{1})\pi \,\sin {\rm{c}}(\frac{l\pi }{2}) & {\rm{if}}\,l\in {{\mathbb{Z}}}_{0}\end{array}$$
with sinc(x) = sin (x)/x and using the two-dimensional Fourier transform defined as:
$$F({k}_{x},{k}_{y})= {\mathcal F} [f(x,y)]={\int }_{-\infty }^{\infty }\,{\int }_{-\infty }^{\infty }\,f(x,y){e}^{-i({k}_{x}x+{k}_{y}y)}\,dx\,dy$$
The Ronchi grating and its Fourier transform can be seen in Fig. 5. We now consider the X-ray beam with a focus located at z = 0, with electric field E0(x, y, 0). Using the paraxial approximation, we can propagate the electric field to the camera position, zc, using the Fresnel integral:
$${E}_{c0}(x,y)={P}_{{z}_{c}}(x,y)\,{E}_{c0}^{F}(x,y)$$
$${E}_{c0}^{F}(x,y)=-\frac{ik}{2\pi {z}_{c}}{e}^{ik{z}_{c}} {\mathcal F} \,{[{E}_{0}(x,y){P}_{{z}_{c}}(x,y)]}_{\begin{array}{c}{k}_{x}=\tfrac{k\cdot x}{{z}_{c}}\\ {k}_{y}=\tfrac{k\cdot y}{{z}_{c}}\end{array}}$$
with k the wavenumber of the electromagnetic field and the spherical phase factor \({P}_{{z}_{c}}\) defined as:
$${P}_{{z}_{c}}(x,y)=\exp \,(\frac{ik}{2{z}_{c}}({x}^{2}+{y}^{2}))$$
Basically, \({E}_{c0}^{F}\) is the electric field at the camera position without the spherical wavefront curvature due to the propagation distance zc. We now place the Ronchi grating at position z = z1, and propagate the beam from focus to the grating using the Fresnel integral. We multiply the field with the transmission function of the Ronchi grating and then propagate the field back to the (now virtual) focus of the beam. We get the resulting (virtual) field ER0:
$${E}_{R0}=\frac{1}{2\pi }\,\sum _{l=-\infty }^{+\infty }\,{R}_{l}\,\exp \,(i\frac{{k}_{l}{X}_{l}}{2})\,{E}_{0}(x+{X}_{l},y,\mathrm{0)}\,\exp \,(i{k}_{l}x)$$
$${X}_{l}=\frac{2\pi {z}_{1}}{kD}l\,{\rm{with}}\,l\in {\mathbb{Z}}$$
We use the Fresnel integral to propagate this field to z = zc. Substituting equation (11) we get:
$${E}_{Rc}=\frac{1}{2\pi }{P}_{{z}_{c}}(x,y)\,\sum _{l=-\infty }^{+\infty }\,{R}_{l}{e}^{i{\phi }_{l}}\,{E}_{c0}^{F}(x-{X}_{D}^{l},y,{z}_{c})\,\exp \,(i{k}_{l}\frac{{z}_{1}}{{z}_{c}}x)$$
$${\phi }_{l}=\frac{{k}_{l}{X}_{l}}{2}\,(\frac{{z}_{1}}{{z}_{c}}-1)$$
$${X}_{D}^{l}=\frac{{k}_{l}}{k}({z}_{c}-{z}_{1})$$
When different orders (i.e. different values of l) overlap, they will interfere and form predominantly linear fringes due to the linear phase in x. For our Ronchi test, we choose D to have the half-beam overlap as shown in Fig. 2. From equation (15) we can calculate the phase difference between the zeroth and first order:
$${\rm{\Delta }}\phi ={\phi }_{{R}_{0}}-{\phi }_{{R}_{1}}-\frac{2{\pi }^{2}{z}_{1}}{k{D}^{2}}\,(\frac{{z}_{1}}{{z}_{c}}-1)+{\hat{S}}_{{X}_{D}}[{\phi }_{c0}(x,y)]-\frac{2\pi {z}_{1}}{{z}_{c}D}x$$
with \({\phi }_{{R}_{0}}\) the phase of R0, \({\phi }_{{R}_{1}}\) the phase of R1, ϕc0(x, y) the phase of \({E}_{c0}^{F}(x,y,{z}_{c})\), and \({\hat{S}}_{{X}_{D}}\) the shearing operator defined as:
$${\hat{S}}_{{X}_{D}}[f(x,y)]=f(x,y)-f(x-{X}_{D},y)$$
and \({X}_{D}\equiv {X}_{D}^{1}\). The first three terms give the constant phase difference between the orders (corresponding to the undetermined Cx in equation (1)). This constant phase is a priori unknown, since we cannot know the exact position of the beam with respect to the grating, and a shift of δx in this position will lead to a constant phase of \(2\pi \tfrac{\delta x}{D}\). The value of this constant phase is actually important during the shear-inversion, and will need to be optimized together with the rest of the wavefront. It could also be used to measure beam jitter, if it is not larger than the grating period. The last two terms show how the phase varies in x and y. The linear phase in x of the last term will result in linear fringes in the intensity of ERC, provided the spatial frequency \(\frac{2\pi {z}_{1}}{{z}_{c}D}\) is large enough. Using standard Fourier methods25 and phase unwrapping algorithms we can retrieve \({\hat{S}}_{{X}_{D}}[{\phi }_{c0}(x,y)]\) as long as the spatial frequency \(\frac{2\pi {z}_{1}}{{z}_{c}D}\) is at least twice the highest spatial frequency that is present in the intensity of ERC; otherwise aliasing will occur. To retrieve the phase of the electric field at the camera location ϕc0(x, y), we will have to invert the shearing operator \({\hat{S}}_{{X}_{D}}\). Measuring the intensity of Ec0 is trivially done without the Ronchi grating. Therefore, we will have full information on both phase and amplitude of the electric field at the camera location, which allows us to propagate the beam to any z location, and therefore fully determine its focal characteristics.
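To illustrate this last step, the sketch below (ours, not the authors' code; grid parameters are placeholders) propagates the complex field reconstructed at the camera back towards the focus with a paraxial angular-spectrum propagator. In practice the strong spherical curvature \({P}_{{z}_{c}}\) is divided out first, as is done in the text, so that the remaining field is well sampled:

```python
import numpy as np

def propagate(field, dx, lam, z):
    """Paraxial (Fresnel) free-space propagation of a sampled complex field.
    field: 2D complex array; dx: pixel size [m]; lam: wavelength [m];
    z: distance [m] (negative values propagate back towards the focus)."""
    ny, nx = field.shape
    k = 2 * np.pi / lam
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    H = np.exp(-1j * z * (KX**2 + KY**2) / (2 * k))  # paraxial transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# amplitude = sqrt of the measured intensity, phi = recovered wavefront:
# focus = propagate(amplitude * np.exp(1j * phi), dx=pixel_size,
#                   lam=1.72e-10, z=-0.991)   # 7.2 keV photons, ~0.17 nm
```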
The Ronchi target (left) and its 2D Fourier transform. T 1 and T 2 are complex transmission functions. The dots in the Fourier transform plot signify delta-distributions.
Inverting the shearing operator
As shown above, the phase retrieved from the Ronchigrams is not the actual phase of the X-ray beam, but the sheared phase. In general we have:
$${\hat{S}}_{\bar{s}}[\phi (x,y)]=\phi (x,y)-\phi (x-{s}_{x},y-{s}_{y})$$
with the shear vector \(\bar{s}=({s}_{x},{s}_{y})\) orthogonal to the lines of the Ronchi grating. The measured data will be sampled in x and y, and defining ϕi,j as the phase at the sample points (i.e., pixels), we have the corresponding discrete operator
$${\hat{S}}_{\bar{s}}[{\phi }_{i,j}]={\phi }_{i,j}-{\phi }_{i-{s}_{x},j-{s}_{y}}$$
with sx and sy expressed in number of pixels. We call \({\phi }_{i,j}^{\bar{s}}\) the measured sheared phase retrieved from the Ronchigram. We now search for a solution of ϕi,j that minimizes the cost function of the mean-square error between the sheared phase and the measured one:
$${U}_{\bar{s}}^{2}=\sum _{i,j}\,{({\hat{S}}_{\bar{s}}[{\phi }_{i,j}]-{\phi }_{i,j}^{\bar{s}}+{\phi }_{c}^{\bar{s}})}^{2}\,{P}_{i,j}^{\bar{s}}$$
with \({\phi }_{c}^{\bar{s}}\) the jitter-dependent constant phase mentioned above and \({P}_{i,j}^{\bar{s}}\) the masking function that is equal to 1 where good sheared data is available and 0 where it isn't. As mentioned above, we need multiple Ronchigrams to recover the phase, due to the limited overlap between the beams after shearing. Therefore the total cost function will be the sum over different values of the shear direction \(\bar{s}\):
$${U}^{2}=\sum _{\bar{s}}{U}_{\bar{s}}^{2}$$
In the reconstruction that is shown in the main body of this paper we use three values of \(\bar{s}\), corresponding to shears at −37.9°, −15.4° and 29.6°, but in principle we could use more shearograms and reduce the error. As in Servin et al.33 we will add a cost function that corresponds to our a priori assumption of smoothness in x and y:
$${R}_{x}^{2}=\sum _{i,j}\,{({\phi }_{i-\mathrm{1,}j}-2{\phi }_{i,j}+{\phi }_{i+\mathrm{1,}j})}^{2}\,{P}_{i-\mathrm{1,}j}\,{P}_{i+\mathrm{1,}j}$$
$${R}_{y}^{2}=\sum _{i,j}\,{({\phi }_{i,j-1}-2{\phi }_{i,j}+{\phi }_{i,j+1})}^{2}\,{P}_{i,j-1}\,{P}_{i,j+1}$$
where Pi,j is a masking function equal to 1 inside the aperture of the beam, and 0 outside it. Note that in principle we could use
$${P}_{i,j}^{\bar{s}}={P}_{i,j}\,{P}_{i-{s}_{x},j-{s}_{y}}$$
although in practice we may need to take the mask slightly smaller. Alternatively, we could allow values in the masking function between 0 and 1 to allow for a weighted average in the cost function. With the regularization, the total cost function becomes:
$${U}^{2}=\sum _{\bar{s}}\,{U}_{\bar{s}}^{2}+\eta \,({R}_{x}^{2}+{R}_{y}^{2})$$
with η the regularization parameter. Efficient minimization of the cost function requires the partial derivatives with respect to ϕi,j and \({\phi }_{c}^{\bar{s}}\):
$$\frac{\partial {U}^{2}}{\partial {\phi }_{k,l}}=\sum _{\bar{s}}\,\frac{\partial {U}_{\bar{s}}^{2}}{\partial {\phi }_{k,l}}+\eta \,(\frac{\partial {R}_{x}^{2}}{\partial {\phi }_{k,l}}+\frac{\partial {R}_{y}^{2}}{\partial {\phi }_{k,l}})$$
$$\frac{\partial {U}^{2}}{\partial {\phi }_{c}^{\bar{s}}}=\frac{\partial {U}_{\bar{s}}^{2}}{\partial {\phi }_{c}^{\bar{s}}}$$
$$\begin{array}{rcl}\frac{\partial {U}_{\bar{s}}^{2}}{\partial {\phi }_{k,l}} & = & 2{P}_{k,l}^{\bar{s}}({\hat{S}}_{\bar{s}}[{\phi }_{k,l}]-{\phi }_{k,l}^{\bar{s}}+{\phi }_{c}^{\bar{s}})\\ & & -2{P}_{k+{s}_{x},l+{s}_{y}}^{\bar{s}}({\hat{S}}_{\bar{s}}[{\phi }_{k+{s}_{x},l+{s}_{y}}]-{\phi }_{k+{s}_{x},l+{s}_{y}}^{\bar{s}}+{\phi }_{c}^{\bar{s}})\end{array}$$
$$\frac{\partial {U}_{\bar{s}}^{2}}{\partial {\phi }_{c}^{\bar{s}}}=2\,\sum _{k,l}\,{P}_{k,l}^{\bar{s}}({\hat{S}}_{\bar{s}}[{\phi }_{k,l}]-{\phi }_{k,l}^{\bar{s}}+{\phi }_{c}^{\bar{s}})$$
$$\begin{array}{rcl}\frac{\partial {R}_{x}^{2}}{\partial {\phi }_{k,l}} & = & 2({\phi }_{k,l}-2{\phi }_{k+\mathrm{1,}l}+{\phi }_{k+\mathrm{2,}l})\,{P}_{k,l}\,{P}_{k+\mathrm{2,}l}\\ & & -\,4({\phi }_{k-\mathrm{1,}l}-2{\phi }_{k,l}+{\phi }_{k+\mathrm{1,}l})\,{P}_{k-\mathrm{1,}l}\,{P}_{k+\mathrm{1,}l}\\ & & +\,2({\phi }_{k-\mathrm{2,}l}-2{\phi }_{k-\mathrm{1,}l}+{\phi }_{k,l})\,{P}_{k-\mathrm{2,}l}\,{P}_{k,l}\end{array}$$
$$\begin{array}{rcl}\frac{\partial {R}_{y}^{2}}{\partial {\phi }_{k,l}} & = & 2({\phi }_{k,l}-2{\phi }_{k,l+1}+{\phi }_{k,l+2})\,{P}_{k,l}\,{P}_{k,l+2}\\ & & -\,4({\phi }_{k,l-1}-2{\phi }_{k,l}+{\phi }_{k,l+1})\,{P}_{k,l-1}\,{P}_{k,l+1}\\ & & +\,2({\phi }_{k,l-2}-2{\phi }_{k,l-1}+{\phi }_{k,l})\,{P}_{k,l-2}\,{P}_{k,l}\end{array}$$
We can now minimize equation (27) using a conjugate gradient descent method; alternatively a limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm37,38,39,40,41 runs very fast.
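A compact sketch of this minimization is given below. It is ours, not the authors' implementation: the use of scipy's L-BFGS-B driver, the periodic edge handling, and the omission of the masks in the regularizer are all simplifying assumptions. The data-fidelity term and its gradient follow equations (22), (30) and (31); the smoothness term follows equations (24)-(25):

```python
import numpy as np
from scipy.optimize import minimize

def shift(a, s):
    """a evaluated at (x - sx, y - sy), with periodic edges for brevity."""
    return np.roll(a, shift=(s[1], s[0]), axis=(0, 1))

def cost_and_grad(params, shape, data, eta):
    """U^2 of Eq. (27) and its gradient. `data` holds one
    (measured sheared phase, mask, shear-in-pixels) triple per Ronchigram;
    the last len(data) entries of `params` are the constants phi_c."""
    n = shape[0] * shape[1]
    phi = params[:n].reshape(shape)
    cs = params[n:]
    grad_phi = np.zeros(shape)
    grad_c = np.zeros(len(data))
    u2 = 0.0
    for m, (phi_s, P, s) in enumerate(data):
        r = P * (phi - shift(phi, s) - phi_s + cs[m])      # masked residual, Eq. (22)
        u2 += np.sum(r**2)
        grad_phi += 2.0 * (r - shift(r, (-s[0], -s[1])))   # Eq. (30)
        grad_c[m] = 2.0 * np.sum(r)                        # Eq. (31)
    for ax in (0, 1):  # smoothness regularizer, Eqs. (24)-(25), masks omitted
        d2 = np.roll(phi, 1, ax) - 2 * phi + np.roll(phi, -1, ax)
        u2 += eta * np.sum(d2**2)
        grad_phi += 2.0 * eta * (np.roll(d2, 1, ax) - 2 * d2 + np.roll(d2, -1, ax))
    return u2, np.concatenate([grad_phi.ravel(), grad_c])

# res = minimize(cost_and_grad, x0, args=(shape, data, eta),
#                jac=True, method="L-BFGS-B")
```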
Ackermann, W. et al. Operation of a free-electron laser from the extreme ultraviolet to the water window. Nature Photon. 1, 336 (2007).
Allaria, E. et al. The FERMI free-electron lasers. J. Synchrotron Rad. 22, 485–491 (2015).
Emma, P. et al. First lasing and operation of an å ngstrom-wavelength free-electron laser. Nature Photon. 4, 641–647 (2010).
Ishikawa, T. et al. A compact X-ray free-electron laser emitting in the sub-angstrom region. Nature Photon. 6, 540–544 (2012).
Tschentscher, T. et al. Photon beam transport and scientific instruments at the european xfel. Applied Sciences 7, 592 (2017).
Bostedt, C. et al. Linac coherent light source: The first five years. Rev. Mod. Phys. 88, 015007 (2016).
Schlichting, I., White, W. E. & Yabashi, M. An introduction to the special issue on X-ray free-electron lasers. J. Synchrotron Rad. 22, 471 (2015).
Seibert, M. M. et al. Single mimivirus particles intercepted and imaged with an x-ray laser. Nature 470, 78–81 (2011).
Chalupský, J. et al. Imprinting a focused x-ray laser beam to measure its full spatial characteristics. Phys. Rev. Applied 4, 014004 (2015).
Schropp, A. et al. Full spatial characterization of a nanofocused x-ray free-electron laser beam by ptychographic imaging. Scientific reports 3, 1633 (2013).
Kayser, Y. et al. Wavefront metrology measurements at sacla by means of x-ray grating interferometry. Opt. Express 22, 9004–9015 (2014).
Weitkamp, T., Nhammer, B., Diaz, A., David, C. & Ziegler, E. X-ray wavefront analysis and optics characterization with a grating interferometer. Applied Physics Letters 86, 054101 (2005).
Assoufid, L. et al. Development and implementation of a portable grating interferometer system as a standard tool for testing optics at the advanced photon source beamline 1-bm. Review of Scientific Instruments 87, 052004 (2016).
Ronchi, V. Forty years of history of a grating interferometer. Appl. Opt. 3, 437–451 (1964).
Uhlén, F. et al. Ronchi test for characterization of X-ray nanofocusing optics and beamlines. J. Synchrotron Rad. 21, 1105–1109 (2014).
Nilsson, D. et al. Ronchi test for characterization of nanofocusing optics at a hard x-ray free-electron laser. Opt. Lett. 37, 5046–5048 (2012).
Liang, M. et al. The coherent x-ray imaging instrument at the linac coherent light source. J. Synchrotron Rad. 22, 514–519 (2015).
Boutet, S. & Williams, G. J. The coherent x-ray imaging (cxi) instrument at the linac coherent light source (lcls). New Journal of Physics 12, 035024 (2010).
Siewert, F. et al. Ultra-precise characterization of lcls hard xray focusing mirrors by high resolution slope measuring deflectometry. Opt. Express 20, 4525–4536 (2012).
Soufli, R. et al. Morphology, microstructure, stress and damage properties of thin film coatings for the LCLS x-ray mirrors. In Juha, L., Bajt, S. & Sobierajski, R. (eds) Damage to VUV, EUV, and X-Ray Optics II, vol. 7361 of Proc. SPIE, 73610U (SPIE, 2009).
Barty, A. et al. Predicting the coherent x-ray wavefront focal properties at the linac coherent light source (lcls) x-ray free electron laser. Opt. Express 17, 15508–15519 (2009).
Hoszowska, J. et al. X-ray two-photon absorption with high fluence xfel pulses. In XXIX International Conference on Photonic, Electronic, and Atomic Collisions (ICPEAC2015), vol. 635 of Journal of Physics: Conference Series, 102009 (IOP Publishing, 2015).
Fuchs, M. et al. Anomalous nonlinear x-ray compton scattering. Nature Phys. 11, 964–971 (2015).
Nass, K. et al. Indications of radiation damage in ferredoxin microcrystals using high-intensity x-fel beams. J. Synchrotron Rad. 22, 225–238 (2015).
Takeda, M., Ina, H. & Kobayashi, S. Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry. J. Opt. Soc. Am. 72, 156–160 (1982).
Elster, C. Exact two-dimensional wave-front reconstruction from lateral shearing interferograms with large shears. Appl. Opt. 39, 5353–5359 (2000).
Elster, C. & Weingärtner, I. Solution to the shearing problem. Appl. Opt. 38, 5024–5031 (1999).
Elster, C. Recovering wavefronts from difference measurements in lateral shearing interferometry. Journal of Computational and Applied Mathematics 110, 177–180 (1999).
Elster, C. & Weingärtner, I. Exact wave-front reconstruction from two lateral shearing interferograms. J. Opt. Soc. Am. A 16, 2281–2285 (1999).
Liang, P., Ding, J., Jin, Z., Guo, C.-S. & tian Wang, H. Two-dimensional wave-front reconstruction from lateral shearing interferograms. Opt. Express 14, 625–634 (2006).
Hadamard, J. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin 13, 49–52 (1902).
Tikhonov, A. N. Solution of incorrectly formulated problems and the regularization method. Sov. Math. Dokl. 4, 1035–1038 (1963).
Servin, M., Malacara, D. & Marroquin, J. L. Wave-front recovery from two orthogonal sheared interferometers. Appl. Optics 35, 4343–4348 (1996).
Jacobi, C. G. J. De functionibus duarum variabilium quadrupliciter periodicis, quibus theoria transcendentium abelianarum innititur. J. für Math. 13, 55–78 (1835).
Mahajan, V. N. Strehl ratio for primary aberrations in terms of their aberration variance. J. Opt. Soc. Am. 73, 860–861 (1983).
Ruze, J. The effect of aperture errors on the antenna radiation pattern. Il Nuovo Cimento 9, 364–380 (1952).
Broyden, C. G. The convergence of a class of double-rank minimization algorithms 1. general considerations. IMA Journal of Applied Mathematics 6, 76–90 (1970).
Fletcher, R. A new approach to variable metric algorithms. The Computer Journal 13, 317–322 (1970).
Goldfarb, D. A family of variable-metric methods derived by variational means. Math. Comp. 24, 23–26 (1970).
Shanno, D. F. Conditioning of quasi-newton methods for function minimization. Math.Comp. 24, 647–656 (1970).
Nocedal, J. Updating quasi-newton matrices with limited storage. Math. Comp. 35, 773–782 (1980).
The authors would like to thank Ulrich Vogt, Daniel Nilsson and Hans Hertz for introducing us to the Ronchi testing technique. Parts of this work were funded by Volkswagen Foundation, the DFG under grant SCHR 1137/1-1, and by the German Ministry of Education and Research (BMBF) under grant number 05K13OD2. A.S. and Y.WL. are grateful for the support by the US Department of Energy, Office of Science, Basic Energy Sciences, Early Career Award. Use of the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, is supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515. The MEC instrument is supported by the US Department of Energy, Office of Science, Office of Fusion Energy Sciences under contract No. SF00515.
SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA, 94025, USA
Bob Nagler
, Andrew Aquila
, Sébastien Boutet
, Eric C. Galtier
, Akel Hashim
, Mark S. Hunter
, Mengning Liang
, Anne E. Sakdinawat
, Matthew H. Seaberg
, Frank Seiboth
, Tim van Driel
, Zhou Xing
, Yanwei Liu
& Hae Ja Lee
Deutsches Elektronen-Synchrotron (DESY), Notkestrasse 85, D-22607, Hamburg, Germany
Christian G. Schroer
, Andreas Schropp
& Frank Seiboth
Department Physik, Universität Hamburg, Luruper Chaussee 149, D-22761, Hamburg, Germany
B.N. designed the experiment and analysed the data. A.A., S.B., E.G., M.H., M.L., T.D., H.L., A.H., C.S., A.S., F.S., B.N. performed the Ronchi experiments. A.S. and Y.L. manufactured the gratings, M.S. performed the mirror simulations, B.N. wrote the paper, all authors read, commented on and reviewed the manuscript.
Correspondence to Bob Nagler.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Non First Normal Form
Unnormalized form (UNF), also known as an unnormalized relation or non first normal form (NF²), is a simple database data model (organization of data in a database) lacking the efficiency of database normalization. An unnormalized data model will suffer the pitfalls of data redundancy, where multiple values and/or complex data structures may be stored within a single field or attribute. The non-first normal form: the NF² data model of Schek/Pistor combines the simple relational data model with the hierarchical data model. It permits the definition of arbitrarily deeply nested structures, which makes it particularly simple to model hierarchical relationships between data objects. This hierarchical structure makes it very easy to keep an overview.
If a relation schema is not in 1NF, this form is also called non-first normal form (NF²) or unnormalized form (UNF). The process of normalization, which decomposes a relation into 1NF, 2NF and 3NF, must preserve the reconstructability of the original relation; that is, the decomposition must be lossless and dependency-preserving. First normal form (1NF) is a property of a relation in a relational database. A relation is in first normal form if and only if the domain of each attribute contains only atomic (indivisible) values, and the value of each attribute contains only a single value from that domain. The first definition of the term, in a 1971 conference paper by Edgar Codd, defined a relation to be in first normal form when none of its domains have any sets as elements.
First normal form (1NF) is a property of a relation in a relational database.A relation is in first normal form if and only if the domain of each attribute contains only atomic (indivisible) values, and the value of each attribute contains only a single value from that domain. The first definition of the term, in a 1971 conference paper by Edgar Codd, defined a relation to be in first normal.
The first normal form (1NF) states that each attribute in the relation is atomic. The second normal form (2NF) states that non-prime attributes must be functionally dependent on the entire candidate key. The third normal form (3NF) states that non-prime attributes must be directly (non-transitively) dependent on candidate keys
NFNF - Non-First Normal Form. Looking for abbreviations of NFNF? It is Non-First Normal Form. Non-First Normal Form listed as NFN
Before understanding the normal forms it is necessary to understand Functional dependency. A functional dependency defines the relationship between two attributes, typically between a prime attribute (primary key) and non-prime attributes
There are three sources of modification anomalies in SQL These are defined as first, second, and third normal forms (1NF, 2NF, 3NF). These normal forms act as remedies to modification anomalies. First normal form To be in first normal form (1NF), a table must have the following qualities: The table is two-dimensional with rows and [
1. First Normal Form - If a relation contain composite or multi-valued attribute, it violates first normal form or a relation is in first normal form if it does not contain any composite or multi-valued attribute. A relation is in first normal form if every attribute in that relation is singled valued attribute
In this article, we will discuss First Normal Form (1NF). If a relation contains a composite or multi-valued attribute, it violates first normal form; a relation is in first normal form if every attribute in it is single-valued.

For example, a table is not in first normal form if its [Color] column can contain multiple values: the first row might include the values red and green. To bring such a table to first normal form, we split it into two tables, one holding the base data and one holding a single color value per row.

Chaotically generated data tables do not always conform to first normal form (1NF). Reducing a table to first normal form is a necessary precondition for normalization. A table is considered converted to first normal form (1NF) when all of its values are atomic.

In a non-first normal form (Oracle-style) table design that uses repeating groups, we cannot know in advance how many cells contain data, so we must test how many values are present - for example, testing whether an act_score column is NULL before using it.
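As a minimal sketch of the split described above (the table and column names are illustrative assumptions, not from the original source), the multi-valued [Color] column moves into its own table keyed by the product:

-- Hypothetical starting point: a table violating 1NF,
-- with several colors packed into one column ('red, green').
CREATE TABLE product_unnormalized (
    product_id INTEGER PRIMARY KEY,
    name       VARCHAR(100),
    colors     VARCHAR(200)  -- e.g. 'red, green': not atomic
);

-- 1NF version: one atomic color value per row.
CREATE TABLE product (
    product_id INTEGER PRIMARY KEY,
    name       VARCHAR(100)
);

CREATE TABLE product_color (
    product_id INTEGER REFERENCES product (product_id),
    color      VARCHAR(50),
    PRIMARY KEY (product_id, color)  -- each color listed once per product
);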
Definition: A relation is said to be in First Normal Form (1NF) if and only if each attribute of the relation is atomic. More simply, to be in 1NF, each column must contain only a single value.

Overview of normal forms (NF: Normal Form): database design normal forms are the standards a database design must satisfy. A database that satisfies them is concise and clearly structured, and will not suffer anomalies on insert, delete, or update operations.

4NF (Fourth Normal Form) rules: if no database table instance contains two or more independent, multivalued facts describing the relevant entity, then it is in Fourth Normal Form. 5NF (Fifth Normal Form) rules: a table is in Fifth Normal Form only if it is in 4NF and it cannot be decomposed into any number of smaller tables without loss of data.

The First Normal Form (1NF) sets basic rules for an organized database: define the data items required, because they become the columns in a table; place related data items in a table; ensure that there are no repeating groups of data; and ensure that there is a primary key. The first rule of 1NF is that you must define the data items. This means looking at the data to be stored, organizing the data into columns, defining what type of data each column contains, and then finally putting the related columns into their own table.
First Normal Form (1NF): data is stored in tables with rows uniquely identified by a primary key; data within each table is stored in individual columns in its most reduced form; there are no repeating groups. Second Normal Form (2NF): everything from 1NF, plus only data that relates to a table's primary key is stored in each table. Third Normal Form (3NF): everything from 2NF, plus no non-key attribute depends on another non-key attribute; once that holds, the third normal form is satisfied.

Conclusions: in this tutorial we talked about the first three normal forms of a relational database and how they are used to reduce data redundancy and avoid insertion, deletion, and update anomalies. We saw the prerequisites of each normal form, some examples of their violations, and how to fix them. Other normal forms exist past the third; however, in the most common applications, reaching the third normal form is enough.

Second Normal Form (2NF): for a table to be in the Second Normal Form, it should be in the First Normal Form, and it should not have partial dependency. To understand what partial dependency is and how to normalize a table to 2NF, jump to the Second Normal Form tutorial.
A table is in 2NF if it is in 1NF (first normal form) and no non-prime attribute is dependent on a proper subset of any candidate key of the table. An attribute that is not part of any candidate key is known as a non-prime attribute. Example: suppose a school wants to store the data of teachers and the subjects they teach, and creates a single table for both. Since a teacher can teach more than one subject, the table can have multiple rows for the same teacher.
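A hedged sketch of that decomposition (the schema below is an illustration under assumed column names): the teacher's own details move to a table keyed by teacher, and the teacher-subject pairs to another, so no non-prime attribute hangs off part of a composite key.

-- Violates 2NF: teacher_age depends only on teacher_id,
-- a proper subset of the composite key (teacher_id, subject).
CREATE TABLE teacher_subject_unnormalized (
    teacher_id  INTEGER,
    subject     VARCHAR(50),
    teacher_age INTEGER,
    PRIMARY KEY (teacher_id, subject)
);

-- 2NF decomposition.
CREATE TABLE teacher (
    teacher_id  INTEGER PRIMARY KEY,
    teacher_age INTEGER
);

CREATE TABLE teacher_subject (
    teacher_id INTEGER REFERENCES teacher (teacher_id),
    subject    VARCHAR(50),
    PRIMARY KEY (teacher_id, subject)
);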
2nd normal form in a nutshell: the table is in first normal form (1NF), and no non-prime attribute is dependent on a proper subset of any candidate key of the table. 3rd Normal Form: an entity type is in third normal form (3NF) when it is in 2NF and all non-primary fields depend on the primary key alone (not on other non-key fields); it refers to the functional dependencies of attributes.
First Normal Form - the information is stored in a relational table with each column containing atomic values, and there are no repeating groups of columns. Second Normal Form - the table is in first normal form and all the columns depend on the table's primary key. (Non-First Normal Form is commonly abbreviated NFNF or NF².)
The First Normal Form is used to reduce the redundancy in the dataset. Hence, if the dataset contains multi-valued entries/attributes, the first normal form will reduce it to separate entries. Rules to follow when creating First Normal Form. Creating a first normal form has certain sets of rules which need to be followed. These rules are FIRST NORMAL FORM (1NF) : A relation schema R is in 1NF, if it does not have any composite attributes,multivalued atttribute or their combination. The objective of first normal form is that the table should contain no repeating groups of data.Data is divided into logical units called entities or tables All attributes (column) in the entity (table) must be single valued. Repeating or multi. The table in this example is in first normal form (1NF) since all attributes are single valued. But it is not yet in 2NF. If student 1 leaves university and the tuple is deleted, then we loose all information about professor Schmid, since this attribute is fully functional dependent on the primary key IDSt. To solve this problem, we must create a new table Professor with the attribute Professor (the name) and the key IDProf. The third table Grade is necessary for combining the two relations. Non first normal form relations to represent hierarchically organized data. Information systems. Data management systems. Database design and models. Comments. Login options. Check if you have access through your credentials or your institution to get full access on this article. Sign in. Full Access. Get this Publication. Information; Contributors; Published in. PODS '84: Proceedings of. Serge Abiteboul, Nicole Bidoit. Non first normal form relations:An algebra allowing data restructuring. [Research Report] RR-0347, INRIA. 1984. inria-00076210 archives-ouvertes . Title: Non first normal form relations:An algebra allowing data restructuring Author: Serge Abiteboul, Nicole Bidoit Subject: Computer Science [cs]/Other [cs.OH] Created Date: 4/28/2021 2:24:11 PM.
Non First Normal Form. Miscellaneous » Unclassified. Add to My List Edit this Entry Rate it: (2.00 / 1 vote) Translation Find a translation for Non First Normal Form in other languages: Select another language: - Select - 简体中文 (Chinese - Simplified) 繁體中文 (Chinese - Traditional) Español (Spanish) Esperanto (Esperanto) 日本語 (Japanese) Português (Portuguese) Deutsch. First Normal Form is concerned with the data structures, not the data itself. Based on four sample records we can't tell you whether your table satisfies 1NF or not. Does your table have a key, named and typed attributes, permit exactly one value per attribute in each tuple, no nulls or other special data, no column ordering or tuple ordering? If yes to all those things then it qualifies as. First Normal Form (1NF) does not eliminate redundancy, but rather, it's that it eliminates repeating groups. 2. Second Normal Form (2NF) : Second Normal Form (2NF) is based on the concept of full functional dependency. A relation that is not in 2NF may suffer from the update anomalies. To be in second normal form, a relation must be in first normal form and relation must not contain any. First normal form excludes variable repeating fields and groups. This is not so much a design guideline as a matter of definition. Relational database theory doesn't deal with records having a variable number of fields. 3 SECOND AND THIRD NORMAL FORMS . Second and third normal forms [2, 3, 7] deal with the relationship between non-key and key fields. Under second and third normal forms, a non. In the first normal form, information items have been put into their own columns; The second normal form introduces a unique value that describes each row, and only that row. Typically the unique identifier has nothing to do with the data in the table, it is usually a counter. In third normal form, the information within each table is not duplicated, and the tables are tied together by the.
First normal form - Wikipedi
The inventor of the relational model Edgar Codd proposed the theory of normalization with the introduction of First Normal Form, and he continued to extend theory with Second and Third Normal Form. Later he joined with Raymond F. Boyce to develop the theory of Boyce-Codd Normal Form. Theory of Data Normalization in SQL is still being developed further. For example, there are discussions even.
Definition of first normal form in the Definitions.net dictionary. Meaning of first normal form. What does first normal form mean? Information and translations of first normal form in the most comprehensive dictionary definitions resource on the web
Insertion anomalies are common in first normal form relations that are not also in any of the higher normal forms. In practical terms, they occur because there are data about more than one entity in the relation. The anomaly forces you to insert data about an unrelated entity (for example, a merchandise item) when you want to insert data about another entity (such as a customer). First normal.
e the following Entity and decide which rule of Normal Form is being violated: ENTITY: CLIENT ORDER ATTRIBUTES: # CLIENT ID # ORDER ID FIRST NAME LAST NAME ORDER DATE CITY ZIP CODE Mark for Review (1) Points 1st Normal Form. 2nd Normal Form. (*) 3rd Normal Form
In the first normal form, you can not just remove one of the values in any multi valued attribute. You can make another entry and take a composite primary key which will be removed in further normalization. Reply. Leave a Reply Cancel reply. Your email address will not be published. Required fields are marked * Name * Email * Website. This site uses Akismet to reduce spam. Learn how your.
Second Normal Form: An entity is in Second Normal Form (2NF) when it meets the requirement of being in First Normal Form (1NF) and additionally: Does not have a composite primary key. Meaning that the primary key can not be subdivided into separate logical entities. All the non-key columns are functionally dependent on the entire primary key The concept of normalization was first proposed by Edgar F. Codd in 1970, when he proposed the first normal form (1NF) in his paper A Relational Model of Data for Large Shared Data Banks (this is the paper in which he introduced the whole idea of relational databases). Codd continued his work on normalization and defined the second normal form (2NF) and third normal form (3NF) in 1971. Codd. When discussing the normalisation process, it is always the First Normal Form that causes the most grief and confusion. Anith Sen takes up the challenge to explain, in simple terms, exactly what the First Normal Form really is, and why it is so important for Database Design. Along the way, he dispels some of the myths that have grown up around 1NF Third Normal Form Rule. The rule of Third Normal Form (3NF) states that no non-UID attribute can be dependent on another non-UID attribute. Third Normal Form prohibits transitive dependencies. A transitive dependency exists when any attribute in an entity is dependent on any other non-UID attribute in that entity
Video: Normalization in Relational Databases: First Normal Form
Non-First Normal Form - How is Non-First Normal Form
ates redundancy, but rather, it's that it eli
First Normal Form. First Normal Form is defined in the definition of relations (tables) itself. This rule defines that all the attributes in a relation must have atomic domains. The values in an atomic domain are indivisible units. We re-arrange the relation (table) as below, to convert it to First Normal Form. Each attribute must contain only a single value from its pre-defined domain. Second.
First normal form: No repeating groups. Tables should have only two dimensions. Since one student has several classes, these classes should be listed in a separate table. Fields Class1, Class2, and Class3 in the above records are indications of design trouble. Spreadsheets often use the third dimension, but tables should not. Another way to look at this problem is with a one-to-many.
A table in a relational database complies with the first normal form (1NF) when it fulfills the following criteria: All data is atomic; All table columns contain identical values; A data set is considered atomic if each item of information is assigned to a separate data field. In the below table of billing data, all value ranges that are either non-atomic or don't contain equivalent data.
ate repeating groups in individual tables. - Create a separate table for each set of related data. -Identify each set of related data with a primary key. --First normal form, Wikipedia Note: A relational databases consists of relations that can be visualized as R-tables. Normal forms are a property of relations, not R-tables -- a R-table in.
If this is the way our data is modeled, it's not in first normal form. Multiple Columns of the Same Thing. Perhaps we've learned our lesson and we don't want groups of data in a single column. Therefore, we break the order items up into multiple columns, say 2, because we've never had an order consisting of more than 2 different items. The code that would create such a construct is this: -- A.
Normal Forms in DBMS Types of Normal Forms with Example
Weblio 辞書 > 英和辞典・和英辞典 > non first normal form の意味・解説 > non first normal formに関連した英語例文. 例文検索の条件設定 「カテゴリ」「情報源」を複数指定しての検索が可能になりました。( プレミアム会員 限定) カテゴリ: ビジネス (0) 法律 (0) 金融 (0) コンピュータ・IT (0) 日常 (0.
The first normal form expects you to follow a few simple rules while designing your database, and they are: Rule 1: Single Valued Attributes. Each column of your table should be single valued which means they should not contain multiple values. We will explain this with help of an example later, let's see the other rules for now. Rule 2: Attribute Domain should not change. This is more of a.
First Normal Form (1NF) A table is in first normal form if it contains no repeating groups. It means A relation in which the intersection of each row and column and contains one and only one value is said to be in first normal form. That is, it is stated that the domain of an attribute must include only atomic values. A domain is atomic if elements of the domain are considered to be.
First Normal Form (1NF) A relation will be 1NF if it contains an atomic value. It states that an attribute of a table cannot hold multiple values. It must hold only single-valued attribute. First normal form disallows the multi-valued attribute, composite attribute, and their combinations. Example: Relation EMPLOYEE is not in 1NF because of multi-valued attribute EMP_PHONE. EMPLOYEE table: EMP.
SQL First, Second and Third Normal Forms - dummie
e its atomicity. Representing an IP address as 10.0.0.1 vs ARRAY[10,0,0,1] vs 167772161 does not matter for 1NF analysis since all three.
From what I've read 2nd normal form seems to relate to composite keys whereas 3rd normal form relates to primary keys. I'm not sure if this is correct though. So 2nd normal form - there's a composite key and all fields in the table must relate to both of the composite key fields. If something doesn't relate then it should be refactored into another table. 3rd normal form - everything has to be.
g Postal codes: USA: 81657.
2nd Normal Form With Example : The data is said to be in 2NF If, 1.It is in First normal form. 2.There should not be any partial dependency of any column on primary key.Means the table have concatanated primary key and each attribute in table depends on that concatanated primary key
1 NF - A relation R is in first normal form (1NF) if and only if all underlying domains contain atomic values only. 2 NF - A relation R is in second normal form (2NF) if and only if it is in 1NF and every non-key attribute is fully dependent on the primary key. 3 NF - A relation R is in third normal form (3NF) if and only if it is in 2NF and every non-key attribute is non-transitively. First Normal Form . A relational table, by definition, is in first normal form. All values of the columns are atomic. That is, they contain no repeating values. Figure1 shows the table FIRST in 1NF. Figure 1: Table in 1NF. Although the table FIRST is in 1NF it contains redundant data. For example, information about the supplier's location and the location's status have to be repeated for every. Normalising Your Database - Second Normal Form (2NF): Now we've looked at normalising a database to 1NF (First Normal Form), we will continue to investigate normalising to Second Normal Form. A table is in first normal form and each non-key field is functionally dependent on the entire primary key. Look for values that occur multiple times in a non-key field
First Normal Form (1NF) When there is no multi-valued attribute present in a relation, then a relation is said to be in 'First Normal Form'. Therefore, a relation that is in 1NF meets all the required properties in relation definition. Important properties are each attribute value must only contain a single value and of the same type, each attribute has unique name. The order is. FIRST NORMAL FORM • In our table 1, we have two violations of first normal form: • First, we have more than one author field, • Second, our subject field contains more than one piece of information. With more than one value in a single field, it would be very difficult to search for all books on a given subject. 8. FIRST NORMAL TABLE • TABLE 2 Title Author ISBN Subject Pages Publisher. For complete DBMS tutorial: http://www.studytonight.com/dbms/In this video, you will learn about the First Normal Form of DBMS. How to design a table which f.. There are more than 3 normal forms but those forms are rarely used and can be ignored without resulting in a non flexible data model. Each normal form constrains the data more than the previous normal form. This means that you must first achieve the first normal form (1NF) in order to be able to achieve the second normal form (2NF). You must. Table in first normal form better than table not in first normal form Table in second normal form better than table in first normal form, and so on Goal: new collection of tables that is free of update anomalies. Functional Dependence. Column B is functionally dependent on column A Each value for A is associated with exactly one value of B A → B A functionally determines B. Candidate key.
The normal form is used to reduce redundancy from the database table. Types of Normal Forms. There are the four types of normal forms: Normal Form Description; 1NF: A relation is in 1NF if it contains an atomic value. 2NF: A relation will be in 2NF if it is in 1NF and all non-key attributes are fully functional dependent on the primary key. 3NF: A relation will be in 3NF if it is in 2NF and no. A relation/table is in the first normal form if it does not contains repeating groups. What is a repeating group? A repeating group is a group of two or more rows/records for an instance of an entity. Video Lecture with full of animations. Example of first normal form. Roll No Name: Marks: 1: Shahzeb: 98: 2: Basit: 90: 3: Sameed: 44: 2: Basit: 70: Here, Roll No 2 is the repeating group because. First normal form but not in second normal form. There are 24 questions to complete. Leave a Reply Cancel reply. Comment. Enter your name or username to comment. Enter your email address to comment. Enter your website URL (optional) Save my name, email, and website in this browser for the next time I comment.. The Customers table in the diagram violates all the three rules of the first normal form. We do not see any Primary Key in the table. The data is not found in its most reduced form. For example. First Normal Form (1NF) Second Normal Form (2NF) Third Normal Form (3NF) Boyce-Codd Normal Form (3.5NF) Fourth Normal Form (4NF) Fifth Normal Form (5NF) Q #3) What is the Purpose of Normalization? Answer: The primary purpose of the normalization is to reduce the data redundancy i.e. the data should only be stored once. This is to avoid any data anomalies that could arise when we attempt to.
First Normal Form (1NF) - GeeksforGeek
ating duplicates in a relational database design.
First Normal Form: Loosely speaking First Normal Form says that each attribute of a Relation must contain one and only one value of the domain. What I would like to convey with this article is that a Relation is by definition in First Normal Form(1NF), while the SQL Table that represents the 'real world' instance(or, a lower level instance) of that Relation may not be in 1NF
Answer: (b). first normal form. 63. The concept in normalization of relations which is based on the full functional dependency is classified as: a. fourth normal form: b. third normal form: c. first normal form: d. second normal form: View Answer Report Discuss Too Difficult! Answer: (d). second normal form. 64. In the tuples, the interpretation of the values of the attribute is considered as.
First Normal Form (1NF) is the most basic normal form of relational database theory. Its purpose is to ensure that the database system has data that it can manipulate in a straightforward manner.
DBMS mcqs with answers set 9 includes the mcqs of relational data integrity, referential integrity, datebase anomalies, types of anomalies, normalization, functional dependency, first normal form, second normal form, third normal form, transitive dependency, fourth normal form and Boyce-Codd normal form (BCNF). These database mcqs are very helpful for those who are preparing UGC NET, GATE exam.
First Normal Form (1NF) - Database Normalizatio
Operations and the Properties on Non-First-Normal-Form Relational Databa ses Created Date: 9/25/1998 3:28:53 P
es the single value of every other attribute in the table. Every tabl
e the following Entity and decide which sets of attributes break the 3rd Normal Form rule: ENTITY: TRAIN ATTRIBUTES: TRAIN ID MAKE DRIVER ID DRIVER NAME DATE OF MANUFACTURE Mark for Review (1) Points TRAIN ID, MAKE DRIVER ID, DRIVER NAME (*
In general, when converting a non-first normal form table to first normal form, the primary key will usually include the original primary key concatenated with the key to the repeating group. The conversion of an unnormalized table to first normal form requires the removal of ____. repeating groups
The requirements to meet second normal form is that the database must be in first normal form and have full functional dependency. Functional Dependency. Functional dependency occurs when all non-key attributes are dependent on the primary key. So if a table has only one primary key, it is fully functional dependent. The figure above does not meet the requirements of second normal form because the non-primary attribute (Item Name) is only dependent on the primary key (Item #). This one table.
Fourth normal form (4NF) is a normal form used in database normalization, in which there are no non-trivial multivalued dependencies except a candidate key. After Boyce-Codd normal form (BCNF), 4NF is the next level of normalization. Although the second, third, and Boyce-Codd normal forms operate with functional dependencies, 4NF is operated with a more universal type of dependency known. If we consider the primary key A to be the far bank of the river and our non-key domain C to be our current location, in order to get to A, our primary key, we need to step on a stepping stone B, another non-key domain, to help us get there. Of course we could jump directly from C to A, but it is easier, and we are less likely to fall in, if we use our stepping stone B. Therefore current location C is transitively dependent on A through our stepping stone B It is in first normal form. It does not have any non-prime attribute that is functionally dependent on any proper subset of any candidate key of the relation. A non-prime attribute of a relation is an attribute that is not a part of any candidate key of the relation first normal form (1NF): only single values are permitted at the intersection of each row and column so there are no repeating groups. normalization: the process of determining how much redundancy exists in a table. second normal form (2NF): the relation must be in 1NF and the PK comprises a single attribut
Techopedia Explains First Normal Form (1NF) The first step in confirming 1NF is modifying multivalued columns to make sure that each column in a table does not take more than one entry. Searching records with duplicate entries is complex. To overcome this situation, all records involved in a relational database table have to be identified by a unique value which will have a seperate column (or attribute). This unique key is called an index key and is used to locate data for retrieval or. There are multiple Non-first-normal form (NF2) query languages now. Two of interest to me are GraphQL (for CRUD) and Gremlin (BI). I'll flip to BI again at some point, I'm guessing, but right now I'm really interested in GraphQL. What I think GraphQL does is what both Chandru (here) and Eggers (texts) have recently mentioned as something they would have liked to have completed -- it includes. First-Normal Form (1NF) With our un-normalised relation now complete we are ready to start the normalisation process. First Normal form is probably the most important step in the normalisation process as it facilities the breaking up of our data into its related data groups, with the following normalised forms fine tuning the relationships between and within the grouped data Normal forms are a property of relations, not R-tables -- a R-table in 1NF is shorthand for consistency with the underlying relation. The redefinition of join in 1970 substituted a single normal form with five (1NF-5NF). It is commonly accepted in the industry that. 1NF is equivalent to the original normal form
First Normal Form (1NF)First Normal Form (1NF) attributes)attributes) itisnotarelationit is not a relation •Fig. 4-2b is in 1st Normal form (but not in a well-structured relation)structured relation) 10. 1NF Example1NF Example Student StudentId StuName CourseId CourseName Grade 100 Mike 112 C++ A 100 Mike 111 Java B 101 Susan 222 Database A 140 Lorenzo 224 Graphics B 11 Practice Exercise. First normal form (1NF), Second normal form (2NF) and the Third Normal Form (3NF) was introduced by Edgar F. Codd, who is also the inventor of the relational model and the concept of normalization. What is 1NF? 1NF is the First normal form, which provides the minimum set of requirements for normalizing a relational database. A table that complies with 1NF assures that it actually represents a. Answer: (a). full functional dependency. 62. The normal form which only includes indivisible values or single atomic values is classified as. a. third normal form. b. first normal form. c. second normal form First Normal Form (1NF) A table is said to be in First Normal Form (1NF) if and only if each attribute of the relation is atomic. That is, Each row in a table should be identified by primary key (a unique column value or group of unique column values) No rows of data should have repeating group of column values
Databases. First normal form 1NF. Examples of tables ..
The first four of these rules—First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), and Boyce-Codd Normal Form (BCNF)—provide adequate guidance, in most cases PDF | On Jan 1, 1988, Abdullah Uz Tansel published Non First Normal Form Temporal Relational Model. | Find, read and cite all the research you need on ResearchGat 5. Define first normal form. First normal form is a table that does not consist of any repeating groups. 6. Define second normal form. What types of problems would you find in tables that are not in second normal form? A relation is in second normal form if it is in first normal form and no nonkey attribute Is dependent on only a portion of the primary key
First Normal Form (1 NF) Second Normal Form (2 NF) Third Normal Form (3 NF) A database is considered third normal form if it meets the requirements of the first 3 normal forms. First Normal Form (1NF): The first normal form requires that a table satisfies the following conditions: Rows are not ordered Columns are not ordere First normal form (1NF) Second normal form(2NF) Boyce-Codd normal form (BC-NF) Fourth normal Form (4NF) Fifth normal form (5NF) Remove Multivalued Attributes Figure: 4-22 Steps in Normalization Third normal form (3NF) 4 7 First Normal Form (1NF) • Only atomic attributes (simple, single-value) • A primary key has been identified • Every relation is in 1NF by definition • 1NF example. First normal form excludes variable repeating fields and groups. This is not so much a design guideline as a matter of definition. Relational database theory doesn't deal with records having a variable number of fields. 3 SECOND AND THIRD NORMAL FORMS Second and third normal forms [2, 3, 7] deal with the relationship between non-key and key fields. Under second and third normal forms, a non. An attribute is called non-prime if it is not a prime attribute- that is, if it is not a member of any candidate key. 1) First Normal Form. A relation schema is said to be first normal form (1NF) if it disallows relations within relations or relations as attribute values within tuples. The only attribute values permitted by 1NF are single atomic values Violating first normal form also forces us to embed and maintaint constants we would not otherwise need and to change queries that could otherwise be left alone. These problems can be mitigated with libraries of code that build queries for us, but still those libraries have to be written. In this light, we see can see first normal form as something that creates efficient code and easier-to.
First Normal Form STUDENT 11. Over to you... 12. Second Normal Form A table is in the second normal form if it's in the first normal form AND no column that is not part of the primary key is dependant only a portion of the primary key 13 First Normal Form. A relational table is considered to be in the first normal form from the start. All values of the column are atomic, which means it contains no repeating values. Second Normal Form. The second normal form means that only tables with composite primary keys can be in the first normal form, but not in the second normal form. A relational table is considered in the second normal form if it is in the first normal form and that every non-key column is fully dependent upon the. For second normal form our database should already be in first normal form and every non-key column must depend on entire primary key. Here we can say that our Friend database was already in second normal form l. Why? Because we don't have composite primary key in our friends and favorite artists table. Composite primary keys are- primary keys made up of more than one column. But there is no. A relation is in Third Normal Form if the relation is in First and Second Normal Form and the non-primary key attributes are not transitively dependent upon the primary key. Start Your Free Data Science Course. Hadoop, Data Science, Statistics & others. A super key can be defined as a group of single or multiple keys which will identify the rows of a table. A candidate key is a column or set.
Oracle non first normal form table desig
Such relations are in at least second normal form (2NF). In theoretical terms, second formal form relations are defined as follows: The relation is in first normal form. All non-key attributes are functionally dependent on the entire primary key. The new term in the preceding is functionally dependent, a special relationship between attributes First Normal Form: No Repeating Elements or Groups of Elements. Take a look at rows 2, 3 and 4 on the spreadsheet in Figure A-1. These represent all the data we have for a single invoice (Invoice #125). In database lingo, this group of rows is referred to as a single database row. Never mind the fact that one database row is made up here of three spreadsheet rows: It's an unfortunate ambiguity. What problems are associated with tables (relations) that are not in first normal form, second normal form, or third normal form, along with the mechanism for converting to all three. Describe the problems associated with tables (relations) that are not in fourth normal form and describe the mechanism for converting to fourth normal form. Show More. Show Less. Ask Your Own Programming Question.
First Normal Form(1NF)_Andy的博客-CSDN博�
First Normal Form (1NF) in DBMS. The First Normal Form (1NF) describes the tabular format in which: • All of the key attributes are defined. • There are no repeating groups in the table. In other words, each row/column intersection contains one and only one value, not a set of values. All attributes are dependent on the primary key First Normal Form (1NF) The first normal form simply has to do with making sure that each data field holds a single value, and not a composite value or multiple values. That's fairly easy to understand, looking at a diagram where a data table might, for example, have the following identifiers for table contents — name, phone number, state and country, along with a primary key identifying the record number An entity is in the first normal form if it contains no repeating groups. In relational terms, a table is in the first normal form if it contains no repeating columns. Repeating columns make your data less flexible, waste disk space, and make it more difficult to search for data. In the following telephone directory example, th
What is Normalization? 1NF, 2NF, 3NF, BCNF Database Exampl
Eliminate all hidden dependencies. Eliminate the possibility of a insertion anomalies. Have a composite key. Have all non key fields depend on the whole primary key. View Answer. Answer: A. The relation in second normal form is also in first normal form and no partial dependencies on any column in primary key. Share Me -the table is not in first normal form (1NF) a column is dependent only on a portion of a composite primary key. 26 In an E-R Model a person, place, or thing with characteristics to be stored in the database are referred to as?-entity-row-attribute-file . entity 27 The multi-step process used when creating a new system is referred to as ____. -Systems Development Life Cycle (SDLC) -data mining. 6.3 Convert first-order logic expressions to normal form This section of Logic Topics presents a Prolog program that translates well-formed formulas (wff's) of first-order logic into so-called normal program clauses. The next section of Logic Topics presents a Prolog-like meta-interpreter (in XSB Prolog) for normal programs. Wffs The well-formed formulas will be Prolog terms formed according.
Database - First Normal Form (1NF) - Tutorialspoin
First Normal Form (1NF) sets the very basic rules for an organized database as follows: Eliminate duplicate columns from the same table. Create separate tables for each group of related data and identify each row by using a unique column or set of columns (i.e., primary key). Second Normal Form (2NF) Second Normal Form (2NF) further addresses the concept of removing duplicate data as follows. First normal form does not allow multivolume attribute, composite attribute and their combination. In other word we can say it allow only single scalar value in each column. Example. In this table data is not normalized. It have multi value in subject column. so we need to normalized it. To make this table in first normal form we put only single value in each column. Teacher_info table that.
Database Normalization Explained
FIRST NAME LAST NAME STREET CITY ZIP CODE Mark for Review (1) Points 1st Normal Form. 2nd Normal Form. 3rd Normal Form. (*) None of the above, the entity is fully normalised. Incorrect Incorrect. Refer to Section 6 Lesson 4. Previous Page 2 of 3 Next Summary Test: Section 6 Quiz Review your answers, feedback, and question scores below. An asterisk (*) indicates a correct answer. Section 6 Quiz. The Smith canonical form and a canonical form related to the first natural normal form are of substantial importance in linear control and system theory , . Here one studies systems of equations $ \dot{x} = A x + B u $, $ x \in \mathbf R ^ {n} $, $ u \in \mathbf R ^ {m} $, and the similarity relation is: $ ( A , B ) \sim ( S A S ^ {-1} , S B ) $. A pair of matrices $ A \in \mathbf R ^ {n. dict.cc | Übersetzungen für 'first normal form' im Finnisch-Deutsch-Wörterbuch, mit echten Sprachaufnahmen, Illustrationen, Beugungsformen,.
Why does gravity increase in star formation?
When a star ignites (i.e. fusion starts), the star maintains its form by balancing gravity's inward pressure against radiation's outward pressure.
I get that the fusion of hydrogen atoms releases energy... fine...
How does gravity keep it together if the mass is decreasing as a result of fusion (mass being converted into energy), so that gravity is weakening as the mass lessens?
Wouldn't the radiation overpower the force of gravity and tear the star apart?
gravity star-formation radiation
The Sun's luminosity is $3.8\times 10^{26}$ W. Application of mass energy equivalence tells you it loses mass at a rate of 4.25 million tonnes per second as hydrogen turns into helium.
This is practically nothing as far as the structure of the star goes. Over its lifetime, the Sun has lost about 0.03% of its mass in this way.
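As a quick back-of-envelope check (my arithmetic, not part of the original answer):

$$\dot m \;=\; \frac{L_\odot}{c^2} \;=\; \frac{3.8\times10^{26}\ \mathrm{W}}{(3.0\times10^{8}\ \mathrm{m\,s^{-1}})^2} \;\approx\; 4.2\times10^{9}\ \mathrm{kg\,s^{-1}},$$

i.e. about four million tonnes per second. Over the Sun's roughly 4.6 billion years ($\approx 1.45\times10^{17}$ s) that integrates to $\sim 6\times10^{26}$ kg, which is about $3\times10^{-4}$ of the solar mass of $2\times10^{30}$ kg, matching the quoted 0.03%.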
Radiation pressure is a feature in stellar evolution calculations. It is almost negligible in the solar interior (at the 1% level compared with thermal pressure). However, it does become more important in more massive stars with hotter interiors and higher luminosities.
I am going to start with this paragraph from Wikipedia (emphasis mine):
The most important fusion process in nature is the one that powers stars. In the 20th century, it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun and other stars as a source of heat and light. The fusion of nuclei in a star, starting from its initial hydrogen and helium abundance, provides that energy and synthesizes new nuclei as a byproduct of that fusion process. The prime energy producer in the Sun is the fusion of hydrogen to form helium, which occurs at a solar-core temperature of 14 million kelvin. The net result is the fusion of four protons into one alpha particle, with the release of two positrons, two neutrinos (which changes two of the protons into neutrons), and energy. Different reaction chains are involved, depending on the mass of the star. For stars the size of the sun or smaller, the proton-proton chain dominates. In heavier stars, the CNO cycle is more important.
The proton-proton chain set of reactions looks like this: [diagram not reproduced]
The CNO cycle looks like this: [diagram not reproduced]
Net Result
Either way, the net result is 4 protons ($^1\!$H nuclei) are turned into 1 alpha particle ($^4\!$He nucleus) plus 2 positrons (e$^+$). The 2 positrons go on to annihilate 2 electrons, so altogether we have a mass change of $$ \Delta M = M_{\mathsf \alpha} - 2M_{\mathsf e} - 4M_{\mathsf P}\,. $$
Let's find out the fractional change in mass: $$ f_\Delta = \frac{\Delta M}{4M_{\mathsf P}} = \frac{M_{\mathsf \alpha} - 2M_{\mathsf e} - 4M_{\mathsf P}}{4M_{\mathsf P}}\,. $$
Now the ratio of the mass of an alpha particle to a proton is $3.9726$, or $$ M_{\mathsf \alpha} = 3.9726\times M_{\mathsf P}\,. $$
The ratio of the mass of a proton to an electron is $1836.1$, or $$ M_{\mathsf e} = \frac{M_{\mathsf P}}{1836.1} = 0.0005446\times M_{\mathsf P}\,. $$
Substituting into the $f_\Delta$ equation, $$ f_\Delta = \frac{3.9726\times M_{\mathsf P} - 0.0011\times M_{\mathsf P} - 4\times M_{\mathsf P}}{4\times M_{\mathsf P}} = \frac{-0.0285}4 = -0.007125 = -0.7125\%\,\,.$$
So, obviously, even if all of the hydrogen were converted (and only a fraction actually is), the loss of mass to the star would be too negligible to matter.
A more important mass loss for large stars is that from their stellar wind, which for very large main-sequence stars (types O or B) removes a sizable fraction of the star's mass over its lifetime.
Eubie Drew
@Aabaakawad - Well done
Does this also explain the flickering of a star?
@Mr.Cruz see astronomy.stackexchange.com/questions/222/…
– Eubie Drew
Some good answers already; I'm going to give a kind of summary, because you touched on a few points.
Why does gravity increase in star formation
Gravitation is a product of a few factors: mass, density, and, not to be ignored, rotation speed.
It's not actually the fusion process that keeps the sun from contracting, at least, not directly. It's heat that keeps the star expanded. That's the balancing act. High temperature wants to expand, gravity wants to contract.
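In textbook terms (this equation is my addition, not the answerer's), that balancing act is hydrostatic equilibrium:

$$\frac{dP}{dr} \;=\; -\,\frac{G\,M(r)\,\rho(r)}{r^{2}},$$

where the outward pressure gradient of the hot gas (plus a small radiation-pressure term) supports each shell against the weight of the material above it; more heat means a larger pressure gradient and an expanded star.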
The fusion process is actually pretty slow, which is why stars like our sun have a main sequence of about 10 billion years, and a lot of the heat that a star starts out with is from the heat of formation. Potential energy gets converted to heat due to the coalescing and condensing of all that matter so stars start out hot, even before fusion begins.
In fact, a star in formation can be many times brighter than the star is during its main sequence, due to the high heat of formation. Here's an article that says the forming sun was 200 times brighter than it is now.
Young proto-stars, as a result of conservation of angular momentum, tend to rotate very fast, and that fast rotation can create a bulge and increase the ejection of matter. The formation process is pretty chaotic compared to the main-sequence stage: lots of ejected matter, much bigger solar storms, lots of heat from formation, etc.
Once the main-sequence stage is underway and rotation has slowed down, there's more of the balance between heat and gravity mentioned above. The fusion process continues to add heat to the core; the star convects or conducts that heat away from the core into the outer layers and then radiates it from its surface. During the main sequence, in general, the core of the star gradually heats up, and in most cases the energy added from fusion isn't nearly strong enough to blow apart the star, unless the star is enormously large, over 150 or 200 solar masses, in which case the star doesn't really work without blowing off a bunch of matter. See: here.
As others have said, mass loss by solar wind is a bigger factor, especially for young and smaller stars, but there are a few factors at play. The short answer to this question is that the mass loss, at least by fusion, is quite small compared to the total mass of the star. Another factor: as hydrogen becomes helium, the core of the star becomes denser, and greater density tends to mean a smaller core, which increases the local gravity, but there are competing effects. The inner core grows denser as it becomes more helium rich, and the fusion tends to move outward to the shell around the helium core, so a star like our sun gets a denser inner core over time, while the layers around the core can grow hotter and larger even as the star loses mass.
As mentioned above, this happens if you have 150 or 200 solar masses. In lower-mass stars, the fusion isn't nearly powerful enough to blow the star apart. Stars and white dwarfs blow apart when they go supernova, but that's different from the main-sequence fusion process.
Our sun will blow off some of its matter when it has its helium flash, so there are examples of what you're describing happening, but not during the main sequence for stars like our sun, where material is expelled primarily by magnetic storms causing coronal mass ejections. Fusion is, generally speaking, more like a slow burn than a big explosion when it's up against the enormous gravitational binding energy of a star.
userLTK
@RobJeffries It has something to do with it. Certainly type Ia: the oxygen and carbon convert to heavier elements and that creates a lot of energy. An iron white dwarf collapse without any energy from fusion might look quite different. But I'll re-word that section.
– userLTK
What mass is the Sun expected to lose as a result of the He flash?
Are you quizzing me? I don't know the specific numbers and it would vary with the size of the star. The helium flash is associated with the formation of the planetary nebula. universetoday.com/25669/the-sun-as-a-white-dwarf-star our sun is expected to lose about half its mass, though some of that probably happens before the helium flash. I should probably change "a lot" to "some of"; that's probably more accurate.
I'm quizzing you because there is not expected to be any major mass loss episode associated with the He flash. In fact, quite the opposite. As the star ascends the giant branch (H-shell burning), it loses some mass (not a lot compared with the AGB phase). The He flash terminates the red giant ascent, and is accompanied by a reduction in the size of the star, higher surface gravity and less wind. The He flash has nothing to do with planetary nebula formation.
Here's the bare bones reason for stars like our sun. The full story is much more...full.
Expansion means cooling. Cooling means less fusion. Less fusion means less energy driving expansion, meaning the outward pressure is going down. Eventually gravity is pulling inward more strongly than radiation is pushing outward. So the material collapses again. Collapsing means heating. Heating means more fusion. More fusion means more radiation pushing outward on the star. Produce enough energy, and you'll overcome gravity and the star expands.
Rinse and repeat.
The star naturally sits at an equilibrium where gravity and radiation balance each other. Deviations from this are self-correcting.
zibadawa timmy
As a star runs out of hydrogen fuel, the fusion slows, causing the gravity to overpower the outward force of pressure, thus contracting. Contraction of the star causes high temperature and pressure, to the extent that it is enough to fuse helium into carbon, then the energy released is stronger than the gravity, increasing the size of the star into a red giant. The following paragraph from an article explains this:
Over its life, the outward pressure of fusion has balanced against the inward pressure of gravity. Once the fusion stops, gravity takes the lead and compresses the star smaller and tighter. Temperatures increase with the contraction, eventually reaching levels where helium is able to fuse into carbon. Depending on the mass of the star, the helium burning might be gradual or might begin with an explosive flash. The energy produced by the helium fusion causes the star to expand outward to many times its original size.
The amount of mass lost is more due to stellar wind than to fusion. To answer your second question, the pressure will never completely overcome the force of gravity. When a star reaches the iron/nickel stage of fusing, it stops, unable to go further. This causes a rapid contraction, leading either to a supernova (which really does tear most of the star apart, except for its core) or to a remnant that slowly cools down toward a black dwarf.
CipherBot
The fusion process tends to speed up as the star burns. This is because, as it adds heat, the speed of the nuclei increases, and that increases the number of interactions. It's kind of counter-intuitive, but as our sun burns hydrogen it increases its hydrogen fusion rate, until the hydrogen has nearly run out, that is.